8h30-9h30 | Registration - Agora
9h30-9h45 | Welcome: Olivier GAUDOIN (Room A)
9h45-10h30 | Francisco J. SAMANIEGO: System Signatures: A 30-year Retrospective (Chair: Bo H. LINDQVIST, Room A)
The signature vector of a coherent system with iid components is defined and its basic properties are discussed. In particular, its distribution-free character is noted, a signature-related closure theorem for systems with iid IFR components (my first "signature accomplishment") is exhibited, and a signature-based representation theorem for a system's reliability function is presented. Several preservation results are presented which demonstrate that a signature vector's stochastic characteristics are preserved in the system's reliability function. A proof of the "No Internal Zeros" property of system signatures is sketched. The signature of a communication network is defined, and the relationship between the domination vector (the vector of coefficients of the reliability polynomial) and the signature vector is identified. The utility of network signatures in comparing the performance of competing networks is illustrated. It is shown that uniformly optimal networks within classes of networks of a fixed dimension can be identified relative to the "stochastic precedence" ordering. Dynamic signatures are defined, and their utility in the comparison of used systems is illustrated. An expression for the joint reliability of systems with shared components is obtained in terms of the bivariate "signature matrix". Three scenarios are treated in which the component reliability function $F(t)$ is estimated from system failure-time data. When the system design is unknown, a consistent estimator of $F$ is obtained using autopsy data to estimate the system's signature vector. Finally, the comparison of systems with independent, heterogeneous components is treated using a recent generalization of system signatures.
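The signature vector described in this talk can be illustrated on a toy system. The following Python sketch (not from the talk; the 3-component structure function is chosen purely for illustration) computes the signature by enumerating the equally likely failure orders that arise under iid continuous component lifetimes:

```python
from itertools import permutations
from math import factorial

def phi(x):
    # Structure function: component 0 in series with the parallel pair (1, 2).
    return min(x[0], max(x[1], x[2]))

def signature(phi, n):
    """s_k = P(the k-th component failure causes system failure),
    assuming iid continuous lifetimes (all n! failure orders equally likely)."""
    counts = [0] * n
    for order in permutations(range(n)):
        state = [1] * n                  # all components initially working
        for k, comp in enumerate(order, start=1):
            state[comp] = 0              # k-th failure in this order
            if phi(state) == 0:          # system fails at the k-th failure
                counts[k - 1] += 1
                break
    return [c / factorial(n) for c in counts]

print(signature(phi, 3))  # [0.3333..., 0.6666..., 0.0]
```

The trailing zero is consistent with the "No Internal Zeros" property mentioned in the abstract: zeros in a signature can occur only in an initial or final run, never between two positive entries.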
10h30-11h00 | Coffee break
11h00-12h30
Stochastic Ordering and Dependence
Francisco Germán BADÍA, Carmen SANGÜESA, Ji Hwan CHA
This work provides multivariate stochastic comparisons between the epoch times of trend renewal processes with the same baseline renewal process. Furthermore, multivariate dependence concepts and ageing properties are studied for the epoch times of trend renewal processes. Some applications are developed. Stochastic comparisons and multivariate dependence for the epoch times of trend renewal processes
Fabio L. SPIZZICHINO
We consider two exchangeable non-negative random variables interpreted as the lifetimes of two similar and stochastically dependent units, and assume that their joint probability distribution is described by an exchangeable time-homogeneous load-sharing model (also known as a Ross model). For this specific class of survival models, we analyze properties of one-dimensional ageing of the marginals and of bivariate ageing, and point out special relations existing among such properties. The notions of bivariate ageing that we consider emerge from a Bayesian standpoint, and are defined in terms of (univariate) stochastic orderings between the residual lifetimes of the two units, conditional on the knowledge of their different ages. On the relations between Marginal and Bivariate Ageing for Exchangeable, Time-homogeneous, Load-Sharing Models
Félix BELZUNCE, Carolina MARTÍNEZ-RIQUELME, José A. MERCADER, José M. RUIZ
The purpose of this paper is to study the role of the relevation transform, where a failed unit is replaced by a used unit with the same age as the failed one, as an alternative to the usual renewal policy. In particular, we compare the stochastic processes arising from a renewal policy and from a unit which is being continuously subjected to a relevation policy. We also consider the problem of where to allocate a relevation to increase the reliability of a system. The relevation transform: Comparison of policies and allocation of a relevation
Advanced Mathematical Methods in System Reliability and Maintenance - ORSJ 1
Shinji INOUE, Shigeru YAMADA
We discuss Markovian software reliability modeling incorporating the effects of a change-point and an imperfect debugging environment. The testing time at which the characteristics of the software failure-occurrence or fault-detection phenomenon change notably is called the change-point. Taking the effect of the change-point on the software reliability growth process into account is important for improving the accuracy of software reliability assessment. Moreover, assuming imperfect debugging activities in software reliability modeling helps the model reflect the actual debugging situation more faithfully. We also give a numerical illustration of our model for software reliability analysis using actual data. Markovian Imperfect Debugging Modeling for Software Reliability Assessment with Change-Point
Lu JIN, Tomofumi UWANO, Kazuyuki SUZUKI
An integrated operation and maintenance policy with flexible load sharing is proposed for multiple-component deteriorating systems under a constant total workload. The underlying deterioration process of the system, which depends on the workload allocation, is described by a discrete-time Markov chain. The decision-making problem is formulated as a Markov decision process that minimizes the total expected cost (both operation and maintenance costs) on an infinite horizon. The properties of the resulting optimal decision policies are investigated, and a set of sufficient conditions for a monotone policy to be optimal are provided. The efficiency of the proposed integrated operation and maintenance policy with flexible load sharing is demonstrated through a numerical example. Operation and Maintenance Policy with Flexible Load Sharing
Hiroyuki OKAMURA, Tadashi DOHI
This paper discusses the computation of the quasi-stationary distribution for continuous-time Markov chains (CTMCs). The quasi-stationary distribution is defined as a left eigenvector of the infinitesimal generator of a CTMC with absorbing states. Compared to the computation of the steady-state probability vector of a CTMC, the computational cost of the quasi-stationary distribution is much higher. In this paper, we introduce an iterative approach to obtain the quasi-stationary distribution, similar to the Gauss-Seidel algorithm for the computation of the steady-state probability vector. A note on computation of quasi-stationary distribution in continuous-time Markov chains
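To illustrate the object being computed (not the abstract's Gauss-Seidel-style scheme, which is not reproduced here): the quasi-stationary distribution is the normalized left Perron eigenvector of the transient sub-generator, and it can be approximated by plain power iteration on the uniformized matrix. A sketch with a made-up 3-state sub-generator:

```python
import numpy as np

# Hypothetical transient sub-generator (absorbing state removed):
# off-diagonals are nonnegative; each row sums to <= 0, the deficit
# being the absorption rate out of that state.
Q = np.array([[-3.0,  2.0, 0.5],
              [ 1.0, -2.0, 0.5],
              [ 0.5,  1.0, -2.0]])

def qsd_power_iteration(Q, tol=1e-12, max_iter=100_000):
    """Quasi-stationary distribution: normalized left eigenvector of the
    transient sub-generator, via power iteration on the uniformized matrix
    P = I + Q/lam, which shares left eigenvectors with Q."""
    n = Q.shape[0]
    lam = np.max(-np.diag(Q)) * 1.01    # uniformization rate > max exit rate
    P = np.eye(n) + Q / lam             # nonnegative substochastic matrix
    v = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        w = v @ P
        w /= w.sum()                    # renormalize to a probability vector
        if np.max(np.abs(w - v)) < tol:
            break
        v = w
    return v

nu = qsd_power_iteration(Q)
```

At convergence, `nu @ Q == -theta * nu` for the decay rate `theta`, which is the defining eigenvector equation of the quasi-stationary distribution.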
Bayesian Inference for Degradation Models
Christine MUELLER, Simone HERMANN, Katja ICKSTADT
A general Bayesian approach for stochastic versions of deterministic growth models is presented to provide predictions for crack propagation in an early stage of the growth process. To improve the prediction, the information of other crack growth processes is used in a hierarchical (mixed-effects) model. Two stochastic versions of a deterministic growth model are compared. One is a nonlinear regression setup where the trajectory is assumed to be the solution of an ordinary differential equation with additive errors. The other is a diffusion model defined by a stochastic differential equation where increments have additive errors. While Bayesian prediction is known for hierarchical models based on nonlinear regression, we propose a new Bayesian prediction method for hierarchical diffusion models. Six growth models for each of the two approaches are compared with respect to their ability to predict the crack propagation in a large data example. Surprisingly, the stochastic differential equation approach has no advantage concerning the prediction compared with the nonlinear regression setup, although the diffusion model seems more appropriate for crack growth. Bayesian Prediction of Crack Growth Based on a Hierarchical Diffusion Model
Silvia RODRÍGUEZ-NARCISO, J. Andrés CHRISTEN
In this work we propose a sequential analysis to infer a quantile of the time to failure from a degradation process, establishing the optimal sampling times. The sequential analysis builds an index based on the expected discrepancy between the estimated sampling quantile and its predicted value, computed via a Monte Carlo method. The sequential analysis is implemented for actual data on the color degradation of stove sheets obtained at the Mabe Laboratory, an appliance company in Mexico. OPTIMAL SEQUENTIAL BAYESIAN ANALYSIS FOR DEGRADATION TESTS IN LINEAR MODELS
Karine BERTIN, Rolando DE LA CRUZ, Cristian MEZA, Christian PAROISSIN
When fitting real data to a degradation model, individual heterogeneity frequently has to be taken into account. Only a few degradation models with random effects have been studied in the literature, and most of them are parametric. Hence, in this paper, we propose a non-parametric Bayesian degradation model based on the gamma process and using the Dirichlet process. On some Bayesian approaches for gamma process
Multi-State System Reliability - 1
Yingyi LI, Ying CHEN, Rui KANG
Phased-mission multi-state systems (PM-MSSs) combine the features of multi-state systems and phased-mission systems. Because of their complexity, research findings on PM-MSSs remain scarce despite their ubiquity in real-world systems. In this paper, a method operating at the level of the failure mechanism is proposed to model and evaluate the reliability of PM-MSSs using modified BDD and MMDD models. A case study illustrates this method in detail, and reliability curves obtained at the end of the paper demonstrate its effectiveness and applicability. Reliability analysis of phased mission multi-state systems based on failure mechanism accumulation method
Yuchang MO, Zhao ZHANG, Lirong CUI
In linear wireless sensor networks (LWSNs), all the sensor nodes are arranged in a straight line. As part of a monitoring system, an LWSN can assess the health status of linear infrastructure such as bridges, highways, pipelines, etc. To achieve highly reliable infrastructure monitoring services, LWSNs with a hybrid structure are designed, in which a limited number of support nodes are used to transfer information. The modeling and analysis of LWSNs with a hybrid structure remains a challenging task due to their inherent complexity. In this work, an MDD-based analytical approach is proposed to evaluate the probability that the LWSN performs at a particular performance level, characterized in terms of the number of sensor nodes able to reach the base station node. In particular, novel procedures are proposed to construct the MDD models. A case study is further presented to substantiate the application of the proposed MDD approach for developing an optimal support node allocation strategy that guarantees the reliability requirement on the infrastructure monitoring services. ANALYZING LINEAR WIRELESS SENSOR NETWORKS WITH BACKUP SUPPORT NODES
Vlad Stefan BARBU, Nicolas VERGNE
In this work we focus on multi-state systems modelled by means of a particular class of stochastic processes called drifting Markov processes; we investigate associated reliability/survival indicators and estimate these quantities under various statistical schemes. Typical tools for studying the evolution and performance of such systems are Markov and semi-Markov processes (cf. Sadek and Limnios, 2002; Limnios and Ouhbi, 2006; Barbu and Limnios, 2008). A hypothesis used in many mathematical models built for real applications is homogeneity with respect to time; clearly, in many applications this homogeneity is inappropriate. But, from a practical point of view, considering fully general non-homogeneous processes can also be impractical. A possible solution is to consider a non-homogeneity that is "smooth", controlled, of a known shape. An example of this type in a Markov framework consists of the drifting Markov chains (cf. Vergne, 2008). For these processes, the Markov transition matrix is a linear (polynomial) function of two (several) Markov transition matrices. Thus we obtain the desired "smooth" non-homogeneity. Reliability and survival analysis for drifting Markov models: modelling and estimation
Advances in Step-Stress Modeling
Stefan BEDBUR, Udo KAMPS
A multi-sample model for general step-stress experiments based on sequential order statistics is proposed and analyzed, where the numbers of observations under each stress level are pre-specified. Contrary to common step-stress approaches, an arbitrary absolutely continuous lifetime distribution can be chosen, and, due to the experimental design of repeated Type-II censoring, the existence of maximum likelihood estimators for the parameters of interest is always guaranteed. Inferential methods for the model parameters associated with the stress levels are shown, and simulation studies illustrate some of the statistical procedures. Step-Stress Testing with Multiple Samples Under Repeated Type-II Censoring
Maria KATERI
In some step-stress accelerated life test (SSALT) experiments, continuous monitoring of the tested items is infeasible and only their inspection at particular time points is possible. The available information is then the number of failures in specific time intervals (interval monitoring). The failure-rate-based SSALT model, introduced by Kamps & Kateri (2015) for continuously monitored experiments, considers a general scale family of distributions and thus enables flexible modeling. It was extended by Bobotas & Kateri (2015) to an interval monitoring scheme that allows for more intermediate inspection points than stress level change points. The analysis of SSALT models for interval-monitored, Type-I censored data and the design of such experiments are discussed, highlighting the role of the optimal allocation of the stress level change points. Statistical inference for the model parameters is considered, presenting point and interval estimation of, e.g., the mean lifetime under each stress level. Maximum likelihood as well as Bayesian approaches are followed.
* The results presented are based on joint work with Panayiotis Bobotas, Udo Kamps and Christian Kohl.
Step-stress experiments under interval monitoring
Elham MOSAYEBI OMSHI, Fariba AZIZI, Soudabeh SHEMEHSAVAR, Firoozeh HAGHIGHI
Step-stress accelerated degradation test (SSADT) is a useful tool for assessing the lifetime distribution of highly reliable products when the available test items are very few. Here, we consider the SSADT plan when the degradation follows an inverse Gaussian (IG) process. Inspired by the tampered failure rate (TFR) model, we assume that a change of stress has a multiplicative effect on the mean rate of degradation. Next, under the constraint that the total experimental cost does not exceed a pre-specified budget, the optimal settings such as sample size, measurement frequency, and number of measurements at each stress level are obtained by minimizing the asymptotic variance of the estimated $p$-quantile of the lifetime distribution of the product. Finally, a simulation study is conducted. Planning of Step-Stress Accelerated Degradation Test Based on Inverse Gaussian Process
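As a toy illustration of the modeling assumption in this abstract (not the paper's optimal design computation), the sketch below simulates one degradation path of an inverse Gaussian process under a step-stress plan in which each stress change multiplies the mean degradation rate, in the spirit of the TFR assumption; all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ssadt_path(dt, n_per_level, rates, lam=4.0):
    """One degradation path of an IG process under step-stress.

    Each stress level has its own mean rate mu (the TFR-style multiplicative
    effect is encoded in the `rates` list). Over an interval of length dt at
    mean rate mu, the increment is inverse Gaussian with mean mu*dt and
    shape lam*dt**2, so increments are independent and strictly positive.
    """
    y, path = 0.0, [0.0]
    for mu in rates:                     # one mean rate per stress level
        for _ in range(n_per_level):
            y += rng.wald(mu * dt, lam * dt**2)  # IG(mean, shape) increment
            path.append(y)
    return np.array(path)

# Hypothetical plan: 3 stress levels, rate doubled roughly at each change.
path = simulate_ssadt_path(dt=1.0, n_per_level=50, rates=[0.5, 1.0, 2.0])
```

Because every inverse Gaussian increment is positive, the simulated path is strictly increasing, as a degradation process should be; the slope visibly steepens at each stress change.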
Estimation of Rare Events Probabilities - IMdR 1
Loïc BREVAULT, Mathieu BALESDENT, Jérôme MORIO
The estimation of rare event probability by simulation usually consists in assessing the probability density functions (PDFs) of input variables and propagating them through an input-output simulation code (often considered a black box, which can be computationally expensive to evaluate). This general method is valid when the collected data are sufficient to assess the input variable PDFs with accuracy. However, when the collected data are too sparse, full knowledge of the joint input variable PDF is not available, and modeling error in the joint PDF (especially in its tail areas) may result in critical under- or over-estimation of the probability of interest. In this paper, we investigate the impact of the input variable PDF modeling (especially in the tail areas) on the rare event probability estimated by simulation techniques. For that purpose, an approach is described that models the center and tail areas separately using appropriate PDFs (kernel-based and extreme value) and combines them to account for the collected data. The approach is applied to an aerospace test case dealing with launch vehicle stage fallout estimation. Rare event probability estimation in the presence of epistemic uncertainty
Inga ZUTAUTAITE, Gintautas DUNDULIS, Sigitas RIMKEVICIUS, Eugenijus USPURAS, Mohamed EID, Gintare STAKELYTE
An approach is proposed as an overall framework for the estimation of the failure probability of pipelines based on the results of a deterministic-probabilistic structural integrity analysis. It takes into account loads, material properties, geometry, boundary conditions, crack sizes, the corrosion rate, the number of defects and failure data. The failure data are treated using Bayesian techniques. The proposed approach contributes firmly to the estimation of rare event probabilities. A selected part of the Lithuanian natural gas transmission network is used as a case study. Uncertainty analysis and uncertainty propagation analysis are performed as well. Estimation of rare events probabilities in natural gas transmission networks
Roman SUEUR, Bertrand IOOSS, Thibault DELAGE
In this paper, we present perturbed-law-based sensitivity indices and show how to adapt them for quantile-oriented sensitivity analysis. We exhibit a simple way to compute these indices in practice using an importance sampling estimator for quantiles. Some useful asymptotic results about this estimator are also provided. Finally, we apply this method to the study of a numerical model that simulates the behaviour of a component of a hydraulic system under severe transient loading. The sensitivity analysis is used to assess the impact of epistemic uncertainties about some physical parameters on the output of the model. SENSITIVITY ANALYSIS USING PERTURBED-LAW BASED INDICES FOR QUANTILES AND APPLICATION TO AN INDUSTRIAL CASE
12h30-14h00 | Lunch
14h00-15h30
Applications of Stochastic Orders in Reliability
Nil Kamal HAZRA, Ji Hwan CHA
In this paper, we consider series and parallel systems composed of $n$ independent items, which are drawn from a population that consists of $m$ different substocks/subpopulations. We show that in order to achieve optimal (maximal) reliability of a series system all items should be drawn from one substock, whereas for the parallel system, the optimal solution means independent drawing of $n$ items from the whole mixed population. On Optimal Grouping for Heterogeneous Items
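The dichotomy stated in this abstract can be checked directly in the simplest two-substock case: since $x^n$ is convex, Jensen's inequality favors mixing at the subpopulation level for series systems and at the item level for parallel systems. A small numerical sketch (illustrative parameter values, not from the paper):

```python
# Two substocks with survival probabilities p1, p2 at the mission time,
# mixed in proportions a and 1-a; series/parallel systems of n items.
p1, p2, a, n = 0.9, 0.7, 0.5, 4

# Series system reliability:
one_stock = a * p1**n + (1 - a) * p2**n        # all n items from one (randomly chosen) substock
mixed_draws = (a * p1 + (1 - a) * p2)**n       # n independent draws from the whole mixture

# Parallel system reliability:
one_stock_par = a * (1 - (1 - p1)**n) + (1 - a) * (1 - (1 - p2)**n)
mixed_draws_par = 1 - (a * (1 - p1) + (1 - a) * (1 - p2))**n

assert one_stock >= mixed_draws         # series: draw everything from one substock
assert mixed_draws_par >= one_stock_par  # parallel: independent draws from the mixture
```

With these numbers the series reliability is 0.4481 from a single substock versus 0.4096 under independent mixing, while the ordering reverses for the parallel system, matching the optimal grouping result described above.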
Mithu Rani KUITI, Nil Kamal HAZRA, Asok K. NANDA
We consider the location-scale family of distributions that contains most of the popular lifetime distributions. Under certain assumptions, we show that the maximum order statistic of one set of random variables, having different/same location as well as different/same scale parameters, dominates that of another set of random variables with respect to different stochastic orders. Along with general results, we consider important specific cases. On Stochastic Comparisons of Maximum Order Statistics from the Location-Scale Family of Distributions
Maxim FINKELSTEIN, Ji Hwan CHA, Nil Kamal HAZRA
We develop a theory of stochastic orders for the age and the residual (remaining) lifetime for populations of manufactured identical items. Specifically, we show that if the random age of a population is smaller (larger) in some stochastic sense than the defined equilibrium age, then it is also smaller (larger) than the corresponding residual lifetime with respect to different stochastic orders. STOCHASTIC ORDERING FOR POPULATIONS OF MANUFACTURED ITEMS
Deterioration Modelling and Applications - SFdS
William MEEKER
Service life prediction is of great importance to manufacturers of coatings and other polymeric materials. Photodegradation, driven primarily by ultraviolet (UV) radiation, is the main cause of failure for organic paints and coatings, as well as many other products made from polymeric materials exposed to sunlight. Traditional methods of service life prediction involve the use of outdoor exposure in harsh UV environments (e.g., Florida and Arizona). Such tests, however, require too much time (generally many years) to do an evaluation. Non-scientific attempts to simply speed up the clock result in incorrect predictions. This paper describes the statistical methods that were developed for a scientifically based approach to using laboratory accelerated tests to produce timely predictions of outdoor service life. The approach involves careful experimentation and identifying a physics/chemistry-motivated model that will adequately describe photodegradation paths of polymeric materials. The model incorporates the effects of the explanatory variables UV spectrum, UV intensity, temperature, and humidity. We use a nonlinear mixed-effects model to describe the sample paths. The methods are illustrated with accelerated laboratory test data for a model epoxy coating. The validity of the methodology is checked by extending our model to allow for dynamic covariates and comparing predictions with specimens that were exposed in an outdoor environment where the explanatory variables are uncontrolled but recorded. DEVELOPMENT OF AN ACCELERATED TEST METHODOLOGY TO PREDICT THE SERVICE LIFE OF POLYMERIC MATERIALS SUBJECT TO OUTDOOR WEATHERING
Vincent COUALLIER, Karim CLAUDIO, Yves LE GAT, Cyril LECLERC
Within the theoretical framework of semi-Markov multi-state processes, the communication presents the particular case of degradation processes with a unidirectional graph and multiple absorbing states, which correspond to multiple failure modes. This work is motivated, in addition to the theoretical interest, by an industrial application to pipe leakage assessment in drinking water networks. In this context, the statistical individuals are water pipes subjected to repeated inspections aiming to actively detect possible leaks with an acoustic device. Repeated individual inspections build up panel data which can be used to parameterize the degradation process model. Maximum likelihood estimation is presented in the case of errorless condition state assessment, and a numerical application is exhibited based on real pipe description and inspection data. The feasibility of extending the method to the case where the condition state assessment may be subject to misclassification errors is finally discussed. Semi-Markov model for multi-state degradation process and parameter estimation from interval-censored panel data with measurement errors
Jinrui MA, Mitra FOULADIRAD, Antoine GRALL
An increasing, excessive air/oil ratio is a slow deterioration process occurring in a wind turbine's hydraulic blade-pitch actuator. This deterioration process changes the dynamic character of the actuator; meanwhile, it can reduce control effectiveness and cause undesirable downtime. Considering the correlation between the blade-pitch angle and the turbine's rotational speed, a state indicator based on the accumulated rotational speed error between the deteriorated actuator condition and the fault-free actuator condition is proposed. This state indicator only requires the wind turbine's operational data. A compound Poisson process is used to model the deterioration of the blade actuator. A wind turbine simulator accounting for the deteriorating blade actuator is implemented in the Matlab/Simulink environment. The simulation results show that the state indicator can reflect the deterioration of the hydraulic blade-pitch system. BLADE-PITCH CONTROL SYSTEM DEGRADATION MODEL
Inference Under Censoring - ISBIS
Shuvashree MONDAL, Debasis KUNDU
The progressive censoring scheme has received considerable attention in recent years. In this paper we introduce a balanced Type-II progressive censoring scheme for two samples. It is observed that the proposed censoring scheme is analytically more tractable than the existing joint progressive Type-II censoring scheme proposed by Rasouli and Balakrishnan. We study statistical inference for the unknown parameters under the assumption that the lifetime distributions of the experimental units in the two samples are exponential with different scale parameters. The maximum likelihood estimators of the unknown parameters are obtained. Based on the exact distributions of the maximum likelihood estimators, exact confidence intervals are also constructed. For comparison purposes we also use bootstrap confidence intervals. It is observed that the bootstrap confidence intervals work very well and are very easy to implement in practice. A Balanced Two Sample Type-II Progressive Censoring Scheme
Ayon GANGULY, Debasis KUNDU
In this article, we consider a simple step-stress life test in the presence of competing risks, modelled using Cox's latent failure times. It is assumed that the stress level is changed as soon as a certain number of failures is observed at the first stress level, and that the data are Type-II censored. The latent failure times at each stress level are assumed to follow an exponential distribution. We obtain the maximum likelihood estimators of the scale parameters under the cumulative exposure model. The exact conditional distributions of the maximum likelihood estimators are derived and then used to construct confidence intervals for the unknown parameters. The optimality of the step-stress life test is addressed. An extensive simulation is performed to judge the performance of the proposed method. Finally, a data set is analyzed for illustrative purposes. ANALYSIS OF SIMPLE STEP-STRESS MODEL IN PRESENCE OF COMPETING RISKS
Fatih KIZILASLAN
In this study, the reliability properties of two-component parallel and series systems are considered for the bivariate generalized exponential distribution introduced by Kundu and Gupta [3]. For this model, the moments and mean residual life functions of these systems, as well as the regression mean residual life function, are derived. Some stochastic ordering results for series and parallel systems are also obtained. SOME RELIABILITY CHARACTERISTICS OF BIVARIATE GENERALIZED EXPONENTIAL DISTRIBUTION
System Reliability Estimation and Evaluation
Watalu YAMAMOTO, Lu JIN
Online monitoring data contain various measurements of a system's activity, and the amount of work is also measured in various ways. When we model the reliability of a system, i.e., the intensity or risk of failure events, we need to choose a time scale. Though there should be a genuine time scale for each failure phenomenon, field data, including online monitoring data, may not provide evidence for it, as there are many uncontrollable factors in the field. Cumulative exposure models are flexible in taking the effects of dynamic covariates on the lifetime scale into account. This article proposes to plan a Policy I maintenance schedule by introducing approximations of log-linear cumulative exposure models via the joint moment generating function of the underlying covariate process. Approximate Log-Linear Cumulative Exposure Models
Zhaohui LI, Qingpei HU, Dan YU
This article constructs a normal approximation approach to determine lower confidence limits for system reliability from component time-to-failure data. The proposed approach has a higher convergence order than the central limit theorem, the most widely used tool in stochastic statistical methods. We use a polynomial adjustment inspired by Winterbottom (1980) to construct the higher-order statistics. A new form of expansion for the Weibull model is provided as a demonstration of handling a multi-parameter model in which the parameter estimators are dependent. Some examples are given to illustrate the efficiency of the proposed approach. A High-Order Approximate Method for System Reliability Assessment
Joni DRIESSEN, Hao PENG, Geert-Jan VAN HOUTUM
In this research, we study a single-component system characterized by three distinct deterioration states, cf. the Delay Time Model: normal, defective, and failed. The system is maintained by an inspection policy with fixed time intervals, and preventive system maintenance after a given number of inspections. The inspections are imperfect, and the probability of an inspection error changes over the system's operation time. Our objective is to minimize the average cost over an infinite time horizon. We present exact cost evaluations for a given maintenance policy, and we compare our model with non-constant probabilities to a model that assumes constant probabilities of inspection errors. Our computational study illustrates that the model with constant probabilities may yield, on average, 19% higher costs than the model using non-constant probabilities of inspection errors. These values depend on the chosen parameter values, but still give an indication of how large the difference between the two models can be. Finally, we also present an extension in which a reliability constraint (in terms of average failures per time unit) is added to our problem. Extended abstract of maintenance optimization under non-constant probabilities of imperfect inspections
Signatures - 1
Serkan ERYILMAZ
The survival signature has been found to be useful to study binary coherent systems that consist of multiple types of components. In this paper, the concept of survival signature is used to study a certain class of unrepairable multi-state systems consisting of multiple types of multi-state components. Extension of the results to multi-state systems consisting of exchangeable dependent multi-state components is also presented. The survival signature for a class of unrepairable multi-state systems
Fabio L. SPIZZICHINO
For coherent binary systems made up of n binary components, we point out some special properties of the notions of reliability, signature, and relative quality functions, under the assumption that the joint probability distribution of the components' lifetimes is described by a time-homogeneous load-sharing model. Such models are characterized in terms of the so-called multivariate conditional hazard rate functions. Within this framework we also study aspects connected to conditional signatures and to the decomposability of the system. The talk is related to joint work with J.-L. Marichal, P. Mathonet, and G. Nappo.
Reliability, Signature, and Relative Quality Functions of Systems under Time-Homogeneous Load-Sharing Models
Jean-Luc MARICHAL, Pierre MATHONET, Jorge NAVARRO, Christian PAROISSIN
The structure signature of a system made up of $n$ components having continuous and i.i.d. lifetimes was defined in the eighties by Samaniego as the $n$-tuple whose $k$-th coordinate is the probability that the $k$-th component failure causes the system to fail. More recently, a bivariate version of this concept was considered as follows. The joint structure signature of a pair of systems built on a common set of components having continuous and i.i.d. lifetimes is a square matrix of order $n$ whose $(k,l)$-entry is the probability that the $k$-th failure causes the first system to fail and the $l$-th failure causes the second system to fail. This concept was successfully used to derive a signature-based decomposition of the joint reliability of the two systems. In this talk we will show an explicit formula to compute the joint structure signature of two or more systems and extend this formula to the general non-i.i.d. case, assuming only that the distribution of the component lifetimes has no ties. Then we will discuss a condition on this distribution for the joint reliability of the systems to have a signature-based decomposition. Finally we will show how these results can be applied to the investigation of the reliability and signature of multistate systems made up of two-state components. This talk is based on the research paper by the authors. Probability signatures of multistate systems made up of two-state components
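The $(k,l)$-entry definition above translates directly into a brute-force computation for small systems. A Python sketch (the two structure functions on three shared components are toy choices for illustration, assuming i.i.d. continuous lifetimes with no ties):

```python
from itertools import permutations
from math import factorial

def phi_a(x):
    return min(x[0], x[1])        # system A: series of components 0 and 1

def phi_b(x):
    return max(x[1], x[2])        # system B: parallel of components 1 and 2

def joint_signature(phi_a, phi_b, n):
    """S[k-1][l-1] = P(the k-th component failure kills system A and the
    l-th kills system B), for i.i.d. continuous component lifetimes."""
    S = [[0.0] * n for _ in range(n)]
    for order in permutations(range(n)):   # each failure order equally likely
        state = [1] * n
        ka = kb = None
        for k, comp in enumerate(order, start=1):
            state[comp] = 0
            if ka is None and phi_a(state) == 0:
                ka = k                     # A fails at the k-th failure
            if kb is None and phi_b(state) == 0:
                kb = k                     # B fails at the k-th failure
        S[ka - 1][kb - 1] += 1.0 / factorial(n)
    return S

S = joint_signature(phi_a, phi_b, 3)
```

Row and column sums of the matrix recover the individual structure signatures; here, for instance, system A (a series pair among three components) has marginal signature (2/3, 1/3, 0).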
|
Recent Advances in Applied Statistical Methods
Recent Advances in Applied Statistical Methods Regina LIU
Tolerance intervals and tolerance regions are important tools for statistical quality control and process monitoring of univariate and multivariate data, respectively. This paper discusses the generalization of tolerance intervals/regions to tolerance tubes in the infinite-dimensional setting of functional data. In addition to generalizing the commonly accepted definitions of the tolerance level of beta-content or beta-expectation, we introduce the new notion of an alpha-exempt beta-expectation tolerance tube, which loosens the definition of the beta-expectation tolerance tube by allowing an alpha portion (usually pre-set by domain experts) of each functional to be exempt from the requirement. More specifically, an alpha-exempt beta-expectation tolerance tube of a sample of n functional data is expected to contain [n x beta] functionals in such a way that at least (1-alpha)x100% of each functional is contained within the boundary of the tube. The proposed tolerance tubes are completely nonparametric and thus broadly applicable. We investigate their theoretical justification and properties. We also show that the alpha-exempt beta-expectation tolerance tube is particularly useful in settings where occasional short-term aberrations of the functional data are deemed acceptable if those aberrations do not cause substantive deviation from the norm. This desirable property is elaborated and illustrated further with both simulations and real applications in continuous monitoring of blood glucose levels in diabetes patients as well as of aviation risk patterns during aircraft landing operations. NONPARAMETRIC TOLERANCE TUBES FOR FUNCTIONAL DATA
Gilles WAINRIB
Cancer has gradually been shown to be a very complex disease. Patients often undergo many relapses along very individual pathways. Deciding on a treatment protocol involves vast amounts of information, among which the patient's data is crucial. In this article, we use machine learning to understand and represent patients through a similarity metric based on their medical data. Furthermore, building on this metric, we attempt to predict cancer relapse. Such a prediction would be valuable medical information while also serving as a response prediction.
Since the early 2000s, the Institut Curie has been collecting clinical data covering a large population of tens of thousands of patients. For each patient, the different chemotherapies are described in structured text format. Additionally, information about surgeries, radiotherapies, adverse events, medical visits and free text reports are available in unstructured text format.
Recent breakthroughs in the field of natural language understanding, particularly with deep learning approaches, have started to enable machine learning algorithms to make sense of text data. In this respect, relapse prediction can be considered a validation of the similarity metric. First results show that, although the task is hard, the algorithms were able to predict relapse to a certain extent. Patient similarity and relapse prediction from unstructured text data using natural language processing and machine learning
Candemir CIGSAR, Jerry LAWLESS
Reliability data analysis of repairable systems typically involves recurrent events of different types. An important aspect of recurrent event processes that may result in clustering of events is the presence of carryover effects following events. This phenomenon can occur, for example, when repairs to address a failure in a hardware or software system may not fully resolve the problem, or may even introduce new problems. Carryover effects provide a natural framework for assessing imperfect repairs, and in this paper, we consider tests for such effects in repairable systems settings. The tests are illustrated on an analysis of photocopier failures. Assessment of the Effects of Imperfect Repairs in Repairable Systems
|
15h30-16h00 | Coffee break | |||||
16h00-17h00 |
Signatures - 1
Signatures - 1 Marco BURKSCHAT, Tomasz RYCHLIK
We consider coherent and mixed systems with components whose exchangeable lifetime distributions come from the failure dependent proportional hazard model.
This means that consecutive failures satisfy the assumptions of the generalized order statistics model.
For fixed system and failure rate proportion jumps, we provide sharp bounds on the deviations of system lifetime distribution quantiles from the respective quantiles of single component nominal and actual lifetime distributions.
The bounds are expressed in the scale units generated by the absolute moments
of various orders of the component lifetime centered about the median of its distribution. Optimal evaluations of quantiles of system lifetime distributions coming from failure dependent proportional hazard model
Somayeh ASHRAFI
In this paper, we consider a three-state system consisting of n binary components of two different types. We assume that component lifetimes of the same
type are exchangeable and component lifetimes of two different types are independent. A mixture representation is obtained for the joint reliability function
of the state lifetimes of the system. For this purpose, we generalize the concept of survival signature to the three-state systems and call it bivariate survival
signature. The bivariate survival signature is computed for several systems composed of two different types of independent modules. Three-state systems with two types of independent components
Somayeh ZAREZADEH
This paper is concerned with measuring the dependency between the lifetimes
of a two-state network and its components using the mutual information. We
define a variant of the signature of the network called semi signature, which
depends only on the network structure. The proposed association measure does not depend on the component (and hence the network) lifetimes, but only on the structure of the network. Some examples are given
for illustration of the results. On the dependency between the lifetimes of a network and its components
|
Degradation modelling and analysis - 1
Degradation modelling and analysis - 1 Jianyu XU, Min XIE
In this paper, we propose a new perspective on the concept of threshold in degradation-based reliability models. In conventional work on degradation, scholars tend to establish a threshold on degradation performance to model a non-traumatic failure mode, i.e., a fixed value such that the system is supposed to fail when the degradation performance exceeds it. However, in many recent cases, an instant failure based on a fixed threshold is arbitrary and impractical. We introduce a logistic model in this paper to provide more flexibility and convenience for degradation-based models. We will show that the traditional threshold-based model can be considered a special case of our model. A LOGISTIC PERSPECTIVE FOR THRESHOLD OF DEGRADATION-FAILURE MODEL
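The idea of softening a hard failure threshold with a logistic link can be sketched as follows; the specific logistic form and parameter names are our illustrative assumptions, not the authors' model:

```python
import math

def failure_prob(x, threshold, steepness):
    """Failure probability at degradation level x under a logistic link.

    The conventional hard threshold is recovered in the limit
    steepness -> infinity, where the probability jumps from 0 to 1
    at x = threshold.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

# Far below the threshold failure is near-impossible, far above it
# near-certain, and at the threshold the probability is exactly 0.5.
p_low = failure_prob(5.0, threshold=10.0, steepness=2.0)
p_mid = failure_prob(10.0, threshold=10.0, steepness=2.0)
p_high = failure_prob(15.0, threshold=10.0, steepness=2.0)
```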
Chien-Yu PENG, Ya-Shan CHENG
Accelerated degradation tests (ADTs) are widely used to assess the lifetime information for highly reliable products. One restrictive assumption with a conventional ADT model is specifying which parameter depends on explanatory variables in advance. The assumption can lead to misuse of physical/chemical mechanisms and unreasonable extrapolation of the product's lifetime at the normal-use conditions. This study proposes a two-stage approach (named threshold degradation) as an alternative model with regression structures that accommodate explanatory variables. A real example is performed to show the differences between the conventional ADT model and the threshold degradation model and to demonstrate the advantages of the latter. THRESHOLD DEGRADATION
Lanqing HONG, Zhisheng YE, Ran LING
Emerging contaminants (ECs) have been identified as potential hazards to the environment and public health. One research focus for ECs lies in their elimination during water treatment. It is of interest to investigate the degradation rate of an EC under different treatment conditions. Existing EC degradation models usually neglect the time-varying volatility of the degradation path over time. In addition, common parametric link functions might not be sufficiently flexible to capture the complex relationship between the treatment conditions and the EC degradation rate. In this paper, we apply the inverse Gaussian (IG) process to model the degradation path of an EC. The widely used Gaussian process is adopted to incorporate the treatment conditions into the degradation process. Point and interval predictions of the EC degradation rate under different treatment conditions are provided. Performance of the semi-parametric degradation model is verified using a simulation study and practical degradation data. Semi-parametric Degradation Model for Emerging Contaminants Based on Gaussian Process
|
Lifetime Data Analysis - 1
Lifetime Data Analysis - 1 E. P. SREEDEVI, P.g. SANKARAN, Isha DEWAN
In survival studies, current status censoring occurs when each individual in the study is observed only once, at a random monitoring time, and the only available information is whether the event of interest has happened before the monitoring time. Competing risks data with current status censoring frequently arise from cross-sectional studies in demography, epidemiology and reliability when objects are exposed to multiple risks of failure. In the present paper, we propose a semiparametric regression model based on sub-survival functions for the analysis of current status competing risks data. The asymptotic properties of the estimators are discussed. Estimation Procedures for Current Status Competing Risks Data
Sudheesh KATTUMANNIL, S ANJANA
In this paper, we develop a simple non-parametric test for testing the independence of time to failure and cause of failure in competing risks setup. Asymptotic properties of the test statistic are studied. We also discuss the testing procedure when the data are right censored. The performance of the proposed test is studied through simulations. TEST FOR INDEPENDENCE BETWEEN TIME TO FAILURE AND CAUSE OF FAILURE WITH K CAUSES
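A generic way to test independence between time to failure and cause of failure nonparametrically is a permutation test; the statistic below (difference of group means) is a simple stand-in of ours, not the test statistic proposed in the talk:

```python
import random

def perm_independence_test(times, causes, n_perm=2000, seed=0):
    """Permutation p-value for independence of failure time and cause.

    Statistic: absolute difference of mean failure times between the
    two causes. Under independence, shuffling the cause labels leaves
    the distribution of the statistic unchanged, so the p-value is the
    fraction of shuffles whose statistic reaches the observed one.
    """
    rng = random.Random(seed)

    def stat(labels):
        g1 = [t for t, c in zip(times, labels) if c == 1]
        g2 = [t for t, c in zip(times, labels) if c == 2]
        return abs(sum(g1) / len(g1) - sum(g2) / len(g2))

    observed = stat(causes)
    labels = list(causes)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if stat(labels) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Cause-1 failures occur early, cause-2 failures late: the dependence
# between time and cause yields a small p-value.
p = perm_independence_test([1, 2, 3, 4, 10, 11, 12, 13],
                           [1, 1, 1, 1, 2, 2, 2, 2])
```

Handling right censoring, as the authors do, requires a different construction than this complete-data sketch.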
Jinyong YAO, Zhiping PANG
In engineering practice, the life indicators of aircraft products are customarily given in terms of flight hours, number of landings and calendar life; the first of the three indicators to reach its upper bound determines the aircraft's service life. Calendar life is an important reference indicator for aircraft qualification, maintenance and retirement. It is therefore necessary to give a scientific and reasonable indicator of the calendar life of aircraft products. Aiming at the definition, evaluation principles and methods of the calendar life of airborne products, the problem is studied in three aspects. First, based on a review of calendar life evaluation systems for airborne products, a calendar life evaluation process for aircraft products is summarized. Second, the sources of uncertainty in calendar life assessment are divided into stochastic uncertainty of parameters and cognitive uncertainty of models; through parameter sensitivity analysis, the influence of each parameter on the uncertainty of the calendar life evaluation is determined. Finally, an analysis of the calendar life uncertainty of a certain type of brake master wheel is given. Research on a Calendar Life Assessment Framework for Airborne Products
|
Multi-State system reliability - 1
Multi-State system reliability - 1 Wei-Chang YEH, Chia-Ling HUANG
System reliability is a major index of system quality in many industries, such as computing centers, radar, telecommunications, electricity generation and transmission, and aerospace. System reliability evaluation and enhancement therefore play an important role in modern industry. The reliability redundancy allocation problem (RRAP) is a well-known technique to increase system reliability. Most RRAP studies use the active strategy, and few adopt the cold-standby strategy for the redundant components in a subsystem. However, the cold-standby strategy in RRAP is an efficient way to increase system reliability. Hence, an RRAP using the cold-standby strategy is studied in this research using a well-known algorithm called Simplified Swarm Optimization (SSO). Three multi-state systems, including a series system, a series-parallel system, and a complex (bridge) system, are presented to evaluate the system reliability of the RRAP with the cold-standby strategy using the SSO algorithm. Multi-state System Reliability Evaluation of Reliability Redundancy Allocation Problems using Simplified Swarm Optimization
Xiang-Yu LI, Yan-Feng LI, Hong-Zhong HUANG
A phased mission system (PMS) is a special kind of system in reliability modeling with wide applications in engineering practice, especially in the aerospace industry, for example in man-made satellites or spacecraft. The whole lifetime of a PMS can be divided into several phases according to different mission demands and system structures. On the other hand, to achieve high reliability, certain critical parts of the system are designed with redundancy architectures, such as cold, hot and warm standby. State-space models such as Markov processes have been widely used in previous studies to evaluate their reliability. But the lifetimes of most mechanical products follow non-exponential distributions such as the Weibull distribution, while electrical components follow exponential distributions. So the Markov process cannot be used for this kind of system, whereas the semi-Markov process (SMP) is applicable. In this paper, the SMP together with an approximation algorithm is proposed to assess the reliability of PMSs consisting of both non-exponential and exponential components. Furthermore, the accuracy of the approximation algorithm is explored, and the reliability assessment of a multi-phased AOCS (attitude and orbit control system) within a man-made satellite is presented to demonstrate the proposed approximation method. Reliability Analysis of Complex Phased Mission System Using Semi-Markov Process
Houbao XU, Mei LI
This paper introduces a kind of explosive logic network. By describing the possible states that the explosive logic network may enter during service, the paper regards the explosive logic network as a multi-state system and formulates it with a group of ODEs. By solving these ODEs, we derive the steady-state probability of the system in each state, and also derive the reliability of the system at a target time $t$ for exponential and Weibull distributions respectively. Reliability Analysis of an Explosive Logic Network with Multiple States
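The general device of formulating a multi-state system as a group of ODEs (the Kolmogorov forward equations) and solving for the state probabilities can be sketched generically; the three-state chain and rates below are our toy example, not the explosive logic network of the talk:

```python
import math

def markov_state_probs(Q, p0, t, steps=100_000):
    """Explicit-Euler integration of the Kolmogorov forward equations
    dp/dt = p Q for a continuous-time Markov chain with generator Q
    (rows sum to zero); returns the state probabilities at time t."""
    n = len(p0)
    p = list(p0)
    h = t / steps
    for _ in range(steps):
        p = [p[j] + h * sum(p[i] * Q[i][j] for i in range(n))
             for j in range(n)]
    return p

# Toy chain: working (0) -> degraded (1) -> failed (2), rates 0.5, 0.3.
Q = [[-0.5, 0.5, 0.0],
     [0.0, -0.3, 0.3],
     [0.0, 0.0, 0.0]]
p = markov_state_probs(Q, [1.0, 0.0, 0.0], t=2.0)
reliability = p[0] + p[1]  # system works in states 0 and 1
# Analytic solution at t = 2: e^{-1} + 2.5*(e^{-0.6} - e^{-1})
```

For exponential rates this is exactly the constant-generator case; a Weibull variant, as in the talk, would make the entries of Q time-dependent.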
|
Imperfect Repair Modeling
Imperfect Repair Modeling Gustavo GILARDONI, Enrico COLOSIMO, Maria Luíza TOLEDO, Marta FREITAS
In statistical models for repairable systems, the effect of repairs after failures may be assumed to be minimal or imperfect. However, no test is available to decide, based on data, which of these assumptions is true for a system. In this paper, a general statistical test procedure is proposed for testing the basic hypothesis of minimal vs. imperfect repair. The test is non-parametric and based on the binomial distribution. Empirical studies of the test are presented and show that, under many scenarios, it has a good performance in terms of the type I error rate. An application with real data involving failures in trucks from a mining company is also presented. Non-parametric test for imperfect repair
Annamraju SYAMSUNDAR, Laurent DOYEN, Olivier GAUDOIN
At MMR 2015 [5], we presented a first attempt to develop imperfect maintenance models based on Geometric Reduction of Age (GRA) and Geometric Reduction of Intensity (GRI). The development of these models and other geometric reduction models are presented here. Their behavior is illustrated based on simulated data and they are applied to sets of maintenance data obtained from the field. GRA and GRI Imperfect Maintenance Models – New Results
Laurent DOYEN, Remy DROUILHET
The basic assumptions on maintenance efficiency are known as perfect maintenance, or As Good As New (the system is renewed), and minimal maintenance, or As Bad As Old (maintenance has no effect on future failure occurrences). Obviously, reality falls between these two extreme cases. An intermediate maintenance effect can be described thanks to imperfect maintenance models. This paper presents an introductory tutorial for our VAM software. VAM, for Virtual Age Model, is an open source R package that implements the principal imperfect maintenance models. VAM usage is based on a formula which specifies the characteristics of the data set to analyze and the model used for it. Thanks to this formula description the package is adaptive: the formula is defined by the user and characterizes the behavior of the new unmaintained system, the types and effects of the preventive and corrective maintenances, and how preventive maintenance times are planned. The package functionalities then enable the user to simulate new data sets, to estimate the parameters of the model by maximum likelihood, and to compute and plot different indicators. VAM, an R package for maintenance and aging models
|
Computational Methods in Reliability
Computational Methods in Reliability Phillip MCNELLES, Lixuan LU
In recent years, there has been increased use of digital Instrumentation and Control (I\&C) systems in Nuclear Power Plants (NPPs). In the context of NPPs, digital systems come with their own set of issues with regard to safety analysis. Digital systems include some form of software, which introduces the issue of time dependency, due to features like feedback/feedforward control loops. Traditional safety analysis methodologies were developed before digital systems became common, and do not explicitly model time-dependent properties. Therefore, dynamic (time-dependent) analysis methodologies have the potential to improve the modelling of digital systems. This paper compares the results of two software tools, Dymonda and YADRAT, for implementing the Dynamic Flowgraph Methodology (DFM). COMPARISON OF TWO IMPLEMENTATIONS OF DYNAMIC FLOWGRAPH METHODOLOGY
Hassane CRAIBI, Anne DUFTOY, Thomas GALTIER, Josselin GARNIER
The reliability assessment of complex power generation systems generally relies on simulation methods. When system failure is a rare event, Monte Carlo methods are computationally intensive and a variance reduction method is needed to accelerate the reliability assessment of such systems. Among variance reduction methods, one may think of particle filter methods such as the interacting particle system (IPS) method. The interest of these methods is that they do not require much knowledge about the system failure to be applied, and therefore they are well suited to industrial applications. Power generation systems often follow deterministic dynamics which are altered by component failures, component repairs, and automatic control mechanisms. We model such dynamic hybrid systems using piecewise deterministic Markov processes. When simulated over a short period of time, such processes tend to generate the same deterministic trajectory repeatedly, limiting the efficiency of the IPS method, for which it is preferable to generate many different trajectories on short time intervals. To reduce this phenomenon, we propose an adaptation of the IPS method based on the memorization method: conditioning the generated trajectories to avoid the most probable ones while computing exactly the influence of the most probable trajectories. The interacting particle system method adapted to piecewise deterministic processes
Margaux DUROEULX, Nicolae BRINZEI, Marie DUFLOT, Stephan MERZ
Estimates of system reliability crucially rely on qualitative techniques for determining the impact of component failures. Formally, the structure function of a system determines minimal tie or cut sets that are instrumental for quantitative techniques of reliability assessment. This paper describes three techniques, based on Boolean satisfiability solving, for computing minimal tie sets. Satisfiability techniques for computing minimal tie sets in reliability assessment
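For intuition, minimal tie sets can be computed by brute-force subset enumeration on small systems; the SAT-based techniques of the talk scale far better, and the bridge example below is our own illustration:

```python
from itertools import combinations

def minimal_tie_sets(components, works):
    """Minimal tie (path) sets of a coherent system.

    works: predicate on a frozenset of functioning components.
    A tie set is a component set whose functioning alone makes the
    system work; it is minimal if no proper subset is a tie set.
    Enumerate subsets smallest-first, keeping only those that do not
    contain a previously found minimal tie set.
    """
    minimal = []
    for r in range(1, len(components) + 1):
        for subset in combinations(components, r):
            s = frozenset(subset)
            if any(m <= s for m in minimal):
                continue
            if works(s):
                minimal.append(s)
    return minimal

# Classic bridge network: links a-b and c-d in parallel, with a
# bridge component e allowing the mixed paths a-e-d and c-e-b.
def bridge(s):
    return ({'a', 'b'} <= s or {'c', 'd'} <= s
            or {'a', 'e', 'd'} <= s or {'c', 'e', 'b'} <= s)

ties = minimal_tie_sets(['a', 'b', 'c', 'd', 'e'], bridge)
# Four minimal tie sets: {a,b}, {c,d}, {a,d,e}, {b,c,e}
```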
|
17h10-18h30 |
Structural reliability
Structural reliability Emmanuel ARDILLON, Philippe BRYLA, Antoine DUMAS, Mourad EL MOUSAOUY
Diagnoses are carried out by EDF to justify the mechanical integrity of in-service penstock pipes thinned by corrosion. They involve the calculation of a margin factor requiring deterministic values for the ultimate tensile strength $Rm_d$ and the thinning $\Delta_{e,d}$, corresponding to quantiles of $Rm$ and $\Delta_e$. These are usually taken at two standard deviations ($\gamma = 2$), considering the uncertainties and natural dispersion of these variables. In order to optimize these standard deviation multipliers $\gamma$ so as to guarantee a given target reliability with regard to the risk of plastic collapse, a semi-probabilistic approach has been developed. This approach led to the implementation of a structural reliability model for the evaluation of the annual failure probability of a thinned penstock. Thanks to a specific formulation of the limit state function, it was possible to cover most penstock configurations. The numerical evaluation of the annual failure probability $P_f$ was optimized by considering a single event. Finally, this study has shown that for the most usual grades of steel and for the usual nominal thickness range, it is possible to reduce the default multiplier $\gamma = 2$ usually applied. For a majority of configurations, values of $\gamma$ from 0 to 1.75 are compatible with an annual target failure probability $P_{f,\mathrm{target}} = 10^{-7}$ for elementary pipes. OPTIMIZING QUANTILES FOR DIAGNOSES OF HYDROPOWER PENSTOCK PIPES BY A STRUCTURAL RELIABILITY APPROACH
Chang YIN, Wei DAI, Yikun CAI, Yubing HUANG
Surface modification technology can help improve the match between the natural environment and a product, and the environmental worthiness of a process has a strong influence on product reliability. Current assessments of surface modification techniques mainly depend on production cost, energy consumption and product quality, but ignore the environmental effect on the surface of the product. Therefore, this paper presents an evaluation method for surface modification processes based on environmental effects. Initially, the relationships between failure modes, failure mechanisms and environmental stresses are discussed to determine the critical surface integrity characteristics and the sensitive environmental stresses. An environmental effect model is developed from the degradation data of the surface integrity characteristics and environment data, using an artificial neural network. The degradation of the surface integrity characteristics in a dynamic environment can then be predicted by the environmental effect model. Finally, an evaluation index is defined and calculated with a stress-strength model. The proposed approach is illustrated with an example application. The results show that the presented method is capable of accurately assessing the adaptability of a coating technology in the natural marine environment. Reliability evaluation of surface modification process based on environmental effect
El-Mahdi BOUHJITI, Julien BAROTH, Frédéric DUFOUR, Benoit MASSON
This paper proposes a reliability analysis strategy based on thermo-hydro-mechanical-leakage (THM-L) finite elements (FE) modeling of large reinforced concrete structures with a containment role. Applied to massive structures such as dams or nuclear containment buildings, the coupled THM-L models require a considerable computational time and the identification of a large number of parameters. Moreover, those input parameters show a spatio-temporal variability which affects the concrete’s behavior. This work suggests an adapted reliability analysis for such complex modeling and large number of inputs. After giving a brief description of the used model to describe the chained THM-L behavior of concrete, the general reliability analysis strategy is detailed. The most influent parameters are selected according to a first order variance based sensitivity analysis. The probabilistic coupling with the FE model is then performed for a reduced number of parameters. The spatial variation of the concrete’s properties is modeled using discretized random fields. Finally, the cumulative distribution functions of the considered variables of interest (the peak temperature at early age or the air leakage rate for example) are sought using a collocation approach. Towards the reliability analysis of large concrete structures behaviorusing finite element models
Yubing HUANG, Wei DAI, Yuqing ZHANG, Yu ZHAO
As the important link of aerospace equipment part, bolted flange joint homogeneity is primary guarantee of equipment reliability. Currently, the standard for the process reliability of the flange bolts is not explicit, which means that the loading homogeneity of the bolts cannot be ensured. Based on the modified Ant Colony Optimization (ACO) path planning, a tightened path of bolts model is proposed for optimizing the path of loading in order to ensuring the homogeneity of bolted flange after loading. According to fitting the function expression based on tightened variation improve the ACO algorithm. The function predicts the tension changes in tightened bolts due to the subsequent tightening of other bolts in the joint. An experimental setup and finite element analysis were developed to verify the numerical results produced by the tightened path of bolts model. Analytical and experimental results were presented and discussed RESEARCH ON PROCESS RELIABILITY OF BOLTED FLANGE JOINT BASED ON AN MODIFIED ACO ALGORITHM
|
Bayesian methods in reliability
Bayesian methods in reliability Huu-Du NGUYEN, Evans GOUNO
We consider repairable systems with identical components. Components can fail simultaneously due to external shocks, which can be, for example, human interventions, extreme environments or hardware failures outside the system. These simultaneous failures are called common-cause failures (CCF). CCFs cannot always be identified, so the data are confounded. We propose a Bayesian approach to estimate the rates of the different types of failures in this situation of incomplete data. The efficiency of the method is illustrated on simulated data. Bayesian estimation for a common-cause failure model
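In the complete-data case, where every event can be attributed to an individual or a common cause, Bayesian rate estimation reduces to a standard conjugate gamma-Poisson update; this sketch (with made-up counts and priors) only frames that simpler problem, not the authors' confounded-data model:

```python
def gamma_posterior(alpha0, beta0, n_events, exposure):
    """Posterior of a Poisson event rate under a Gamma(alpha0, beta0)
    prior: with n_events observed over a total exposure time,
    conjugacy gives Gamma(alpha0 + n_events, beta0 + exposure).
    Returns the posterior shape, rate and mean."""
    alpha = alpha0 + n_events
    beta = beta0 + exposure
    return alpha, beta, alpha / beta

# Individual failures vs. common-cause shocks, estimated separately
# because here every event is assumed fully attributed (no confounding).
_, _, lam_ind = gamma_posterior(1.0, 1.0, n_events=12, exposure=100.0)
_, _, lam_ccf = gamma_posterior(1.0, 1.0, n_events=2, exposure=100.0)
```

The confounded case of the talk, where the cause of a failure group is unobserved, requires augmenting this update with latent attribution variables.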
Yunayan XING, Ping JIANG, Qun WANG, Yunfei GUAN
The development process of a product usually consists of multiple test stages, and product reliability improves as the test stages advance and the product design is modified. In this paper, a Bayesian sequential testing method with a dynamic parameter is proposed to judge whether the product reliability level meets the development requirement and when the development process can be terminated. Firstly, the determination model for the prior distribution of the dynamic parameter is presented, applying a non-informative prior distribution and a conjugate prior distribution based on multi-stage test data and the order constraints implied by the growth trend of product reliability. Then the Bayesian sequential testing model is constructed to decide whether the product reliability satisfies the predefined requirement. Finally, a numerical example demonstrates the procedure of the statistical decision analysis. Bayesian sequential testing method for the dynamic parameter of a binomial distribution
Paolo MASON
A model of crack initiation and residual component life is fitted to the inspection history of a set of gas circulator impellers at two UK power stations. The model is then used to estimate the probability of future in-service failure of each item in scenarios in which the next opportunity for inspection (i.e. detection of a developing crack) is exploited or forgone. The present framework, which makes decisions possible on an item-by-item basis as to whether inspection is warranted (once in-service failure and inspection are costed), is applicable to any set of components that are periodically inspected with a standardized methodology and are retired from service as soon as they are found to bear a crack. The study takes into account in an exact manner both variability, i.e. the stochastic character of the future behaviour of the components, and uncertainty, i.e. the stochastic character of their past behaviour on which the inference of the model parameters is based. A Bayesian analysis of component life expectancy and its implications on the inspection schedule
Yuqing HU, Xiaoyang LI, Rui KANG
Reliability acceptance sampling plans (RASPs) are used to determine the acceptability of a batch of a product. When RASPs are designed with the traditional method, the RASPs for each batch and type of product are the same, which is bound to be time- and cost-consuming for enterprises. To solve this issue, a Bayesian RASP design method based on a Bayesian belief network (BBN) is proposed. The BBN is utilized to quantify the condition of a specified batch of products. A baseline batch of the products is defined as the basis, and a correction factor is proposed to measure the difference between a specified batch and the baseline batch. The correction factor is then used to update the prior distribution and modify the RASP for any batch of the product in the Bayesian test design. The effectiveness and persuasiveness of the proposed method are illustrated with the printed wiring board of an electronic product. Bayesian Reliability Acceptance Sampling Plans Design Based on Bayesian Belief Networks
|
k-out-of-n systems - 1
k-out-of-n systems - 1 Funda ISCIOGLU
The dynamic reliability analysis of multi-state k-out-of-n:G systems has been widely studied in the literature under the assumption of a homogeneous continuous-time Markov degradation process. In this study, we evaluate the dynamic performance of multi-state k-out-of-n:G systems under non-homogeneous continuous-time Markov process (NHCTMP) degradation by using lifetimes in terms of order statistics. This degradation assumption captures the effect of age on the state changes of the components through time-dependent transition rates between component states, which is typical of many systems and more practical in real-life applications. The lifetime properties of two different multi-state k-out-of-n:G systems under the NHCTMP assumption are studied when the components are independent and identical. Numerical results for the performance characteristics of these systems are provided and supported with graphical illustrations. DYNAMIC RELIABILITY ANALYSIS OF MULTI-STATE K-OUT-OF-N:G SYSTEMS BY MEANS OF ORDER STATISTICS
Milia HABIB, Farouk YALAOUI, Hicham CHEHADE, Nazir CHEBBO, Iman JARKASS
Dependency is essential for system reliability/availability optimization, but it is often neglected in reliability studies due to its complexity. This paper thus has two focuses: dependency modeling and availability optimization. We propose a redundant dependency model for the k-out-of-n:G system describing the failure dependency between the components and the redistribution of the system load. We then present an optimization approach to minimize the system cost under an availability constraint while considering the redundant dependency. For the resolution, we use, in addition to the solver LINGO, a genetic algorithm, chosen for its rapid search capability and its flexibility in representing mixed design variables. A numerical application is presented. The obtained results show that our study can help to build efficient, economic systems by reinforcing performance from inside the system. The best design parameters are obtained in a reasonable computational time. Availability analysis and redundant dependency modeling of k-out-of-n:G system
Cihangir KAN
This paper studies a generalized version of the m-consecutive-k-out-of-n:F system, named the m-consecutive-k,l-out-of-n:F system. This system consists of n linearly (circularly) ordered components and fails if and only if there are at least m l-overlapping runs of k consecutive failed components. The parameter l acts as a leverage in this system: the reliability of the system is bounded by that of its overlapping and non-overlapping counterparts. Reliability and Joint Reliability Importance in a m-consecutive-k,l-out-of-n:F System
Hanlin LIU
In the system design process, standby redundancy is a widely used technique to improve system reliability and availability. In this paper, we investigate a repairable K-out-of-N system with both warm and cold standby redundancy. In the proposed system, each component can be in a failed, cold, warm or active state, and the components are assumed to be repairable. The system is modeled by a continuous-time Markov chain (CTMC) and the long-run system availability is derived. Furthermore, the optimal number of warm standby units is studied by considering system availability and the long-run average cost. Illustrative examples are given to show applications of the proposed model. Redundancy allocation of mixed warm and cold standby components in repairable K-out-of-N systems
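The long-run availability computation that this abstract derives via a CTMC can be illustrated on a stripped-down case. The sketch below (all rates and the single-repairman assumption are illustrative, not taken from the paper) computes the stationary availability of a plain k-out-of-n:G system with exponential failures and repairs:

```python
import numpy as np

def kofn_availability(n, k, lam, mu, r=1):
    """Long-run availability of a k-out-of-n:G system whose components fail
    at rate lam and are repaired at rate mu by r repairmen.
    State i = number of failed components; the system is up while i <= n - k."""
    Q = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:                     # one of the n - i working components fails
            Q[i, i + 1] = (n - i) * lam
        if i > 0:                     # at most r components are repaired in parallel
            Q[i, i - 1] = min(i, r) * mu
        Q[i, i] = -Q[i].sum()
    # stationary distribution: pi Q = 0 with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(n + 1)])
    b = np.zeros(n + 2); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi[: n - k + 1].sum()

# 2-out-of-3:G system with a single repairman
avail = kofn_availability(3, 2, lam=0.01, mu=0.5)
```

The paper's richer model adds cold and warm standby states per component; those enlarge the state space, but the stationary-distribution step is the same.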
|
Stochastic processes in reliability - 1
Dmitrii SILVESTROV, Sergei SILVESTROV
New algorithms for computing asymptotic expansions for moments of hitting times for nonlinearly perturbed semi-Markov processes are presented. The algorithms are based on special techniques of sequential phase space reduction and some kind of operational calculus for Laurent asymptotic expansions applied to moments of hitting times for perturbed semi-Markov processes. These algorithms have a universal character. They can be applied to nonlinearly perturbed semi-Markov processes with an arbitrary asymptotic communicative structure of the phase space. The algorithms are computationally effective, due to a recurrent character of the corresponding computational procedures. Applications to asymptotic analysis of reliability, queuing, information and other models of perturbed stochastic systems are discussed. Asymptotic Expansions for Moments of Hitting Times for Nonlinearly Perturbed Semi-Markov Processes
Irene VOTSI, Mohamed HAMDAOUI
Semi-Markov models are state-of-the-art stochastic models that enable a flexible description of complex systems. Here we focus on reliability indicators for discrete-time, discrete-state-space semi-Markov models. Following previous results on the discrete-time rate of occurrence of failures, we aim at providing reliability indicators that quantify the contribution of each non-operational state to the failure of a random, repairable system modeled by a semi-Markov model. To achieve this, we introduce the conditional hitting time intensity and study its statistical estimation by means of counting processes. We further provide numerical examples based on simulated data. This is a first attempt to raise sensitivity issues for reliability indicators of semi-Markov models. An application to the reliability of mechanical systems subjected to earthquake loads is considered for future work. Conditional Hitting Time Intensity for Semi-Markov Models
Galina ZVERKINA
A computable estimate of the readiness coefficient for a standard binary-state system is established in the case where both working and repair time distributions have exponential moments. Exponential bounds of convergence for availability factor
Houbao XU
This paper investigates the dynamic solution of a queueing system with two kinds of breakdown states. By formulating the queueing system as the abstract Cauchy problem $\mathrm{d}g(\cdot, t)/\mathrm{d}t=(A+U+E)g(\cdot,t)$, the paper shows that the system operator generates a positive strongly continuous semigroup of contractions $T(t)$. The unique positive time-dependent solution of the system can then be expressed as $g(\cdot, t)=T(t)g_0$, where $g_0$ is the initial condition of the system. The steady-state availability of the queueing system is also presented at the end of the paper. Reliability Analysis of A Queueing System with Repairable Service Station
|
Lifetime Distributions Theory
Ali Riza BOZBULUT, Serkan ERYILMAZ
Two new generalizations of the classical extreme shock model are defined and studied. In the classical extreme shock model, the system fails due to a single catastrophic shock. In the new general models, the system is subject to shocks that may arrive from one of m possible sources; both models reduce to the classical extreme shock model when m = 1. Survival functions and mean time to failure values of the system under the new models are obtained, assuming phase-type distributions for the times between successive shocks. Generalized Extreme Shock Models
Juan C. LARIA DE LA CRUZ, Josué M. CORUJO RODRÍGUEZ, José E. VALDES CASTRO
We consider a cold standby repairable system composed of n components, of which n-1 are spares, and a repairing unit. When a component fails, it is immediately sent to the repairing unit; after its repair is completed, it is placed back as the main component or as a spare. All repairs are perfect.
A second type of component, which is also a client of the repairing unit, is considered. The lifetimes and repair times of the components are assumed to be mutually independent random variables. The asymptotic behavior of the lifetime of the system is investigated. A Study on the Reliability of A Cold Standby System with Two Types of Reparation
Boyan DIMITROV, Sahib ESA
We follow the ideas of measuring the strength of dependence between random events presented at the two previous MMR conferences, in South Africa and Tokyo. Here we apply them to analyzing the local dependence structure of some popular bivariate distributions. In the conference presentation we focus on the bivariate normal distribution with various correlation coefficients, and on the Marshall-Olkin distribution with various parameter combinations. We draw the surfaces $z = g_i(x,y)$, $i=1,2$, of the dependence of the i-th component on the other component $j\neq i$ within the squares $[x, x+1]\times [y,y+1]$ and $[x,x+.5]\times [y,y+.5]$. The points $(x,y)$ run within the square $[-3.5, 3.5]\times [-3.5, 3.5]$ for the bivariate normal distribution, and in $[0,10] \times [0,10]$ for the Marshall-Olkin distribution. INTERVAL DEPENDENCE STRUCTURES OF TWO BIVARIATE DISTRIBUTIONS IN RISK AND RELIABILITY
Josué M. CORUJO RODRÍGUEZ, José E. VALDES CASTRO, Juan C. LARIA DE LA CRUZ
We consider two models of two-unit repairable systems: a cold standby system and a warm standby system. We suppose that the lifetimes and repair times of the units are all independent, exponentially distributed random variables. Using stochastic orders, we compare the lifetimes of the systems under different assumptions on the parameters of the exponential distributions. STOCHASTIC COMPARISONS OF TWO-UNITS MARKOVIAN REPARABLE SYSTEMS
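For the exponential two-unit systems compared in this abstract, the mean time to failure follows from the transient part of the CTMC generator; cold standby corresponds to a standby failure rate of zero. A minimal sketch under the assumption of a single repair facility (all rates illustrative):

```python
import numpy as np

def mttf(lam, mu, alpha):
    """MTTF of a two-unit standby system with repair (exponential rates).
    Transient states: 0 = both units good, 1 = one unit under repair;
    absorption = both units failed.  alpha is the failure rate of the
    standby unit: alpha = 0 gives cold standby, alpha > 0 warm standby."""
    T = np.array([[-(lam + alpha),  lam + alpha],
                  [mu,             -(mu + lam)]])
    m = np.linalg.solve(-T, np.ones(2))   # expected times to absorption
    return m[0]

cold = mttf(lam=0.1, mu=1.0, alpha=0.0)   # cold standby
warm = mttf(lam=0.1, mu=1.0, alpha=0.05)  # warm standby lives shorter on average
```

For the cold standby case the result matches the classical closed form $(2\lambda+\mu)/\lambda^2$, which is one way to sanity-check the generator.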
|
Maintenance policies
Shey-Huey SHEU, Tzu-Hsin LIU, Zhe-George ZHANG, Hsin-Nan TSAI
This article studies a two-unit system with failure interactions. The system is subject to two types of shocks (I and II). A type I shock causes a minor failure of unit A, removed by a minimal repair; a type II shock causes a complete system failure that calls for a corrective replacement. Each minor failure of unit A also inflicts a random amount of damage on unit B, and this damage accumulates until it triggers a preventive or corrective replacement. Moreover, at each minor failure of unit A, unit B with cumulative damage level z may itself suffer a minor failure with probability $\pi(z)$, fixed by a minimal repair. This paper proposes a general replacement policy under which the system is replaced preventively at the Nth type I shock, or at the time when the total damage to unit B exceeds a pre-specified level Z (but is less than the failure level K, where K>Z), and is replaced correctively either at the first type II shock or when the total damage to unit B exceeds the failure level K, whichever occurs first. OPTIMAL REPLACEMENT POLICY BASED ON CUMULATIVE DAMAGE FOR A TWO-UNIT SYSTEM
Gustavo L. GILARDONI, Cátia R. GONçALVES
In a seminal paper published in 1960, Barlow and Hunter considered a repairable system subject to minimal repairs and derived the deterministic optimal maintenance policy minimizing the expected cost per unit of time. Here we propose a random policy that maintains the system whenever the failure intensity exceeds the observed cost per unit of time. When the system deteriorates over time, this random policy is shown to have a lower expected cost than the periodic one for a general repair model. Moreover, under the minimal repair assumption and a Power Law Process intensity, the exact distribution of the random maintenance time and the associated cost are shown to be related to the generalized Poisson distribution. A random maintenance policy for a repairable system
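The deterministic Barlow and Hunter benchmark referenced in this abstract has a closed-form optimum under minimal repair with a Power Law Process: minimizing $C(\tau)=(c_r + c_m(\tau/\theta)^\beta)/\tau$ over the replacement period $\tau$ gives $\tau^*=\theta\,(c_r/(c_m(\beta-1)))^{1/\beta}$ for $\beta>1$. A quick numerical check (cost parameters are illustrative):

```python
import numpy as np

def cost_rate(tau, beta, theta, c_r, c_m):
    """Expected cost per unit time of periodic replacement every tau units,
    with minimal repairs in between: E[N(tau)] = (tau/theta)**beta for a PLP."""
    return (c_r + c_m * (tau / theta) ** beta) / tau

beta, theta, c_r, c_m = 2.5, 100.0, 15.0, 10.0   # illustrative values, beta > 1
taus = np.linspace(1.0, 300.0, 100_000)
tau_numeric = taus[np.argmin(cost_rate(taus, beta, theta, c_r, c_m))]
tau_closed = theta * (c_r / (c_m * (beta - 1.0))) ** (1.0 / beta)
```

The grid minimum agrees with the closed form; the paper's contribution is to replace this deterministic $\tau^*$ with a random stopping rule.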
Li YANG, Yu ZHAO, Xiaobing MA
This study proposes a condition-based maintenance policy for a single-unit system subject to gradual degradation. The failure process of the system is divided into three states, namely normal, defective and failed, where the defective state incurs a greater degradation rate than the normal state. Periodic inspections are performed to measure the state and the degradation level of the system, and two preventive degradation thresholds are scheduled depending on the system state. The expected cost per unit time is derived and minimized jointly over the two preventive thresholds and the inspection interval. A numerical example is presented to illustrate the maintenance model. A CONDITION-BASED MAINTENANCE MODEL BASED ON A TWO-STAGE DEGRADATION PROCESS
|
19h30-22h00 | Welcome cocktail: Restaurant Le Téléphérique, Bastille Hill |
9h00-9h45 | Emanuele BORGONOVO Reliability Importance Measures: A Mathematical Viewpoint Importance measures are an essential tool in reliability analysis. They provide guidance in applications ranging from redundancy allocation, to maintenance optimization. We present an overview of importance measures, with focus on two notions that characterize their definition: the notion of criticality and of time consistency. The notion of criticality plays a role in the construction of the Birnbaum and of the Barlow-Proschan importance measures. The notion of time consistency is a recent notion that we discuss in association with a new reliability importance measure based on the effect of component failures on the system mean time to failure. We also highlight the distinction between time dependent and time independent importance measures. We conclude with future research perspectives. Reliability Importance Measures: A Mathematical Viewpoint Chair: Min XIE, Room A | |||||
10h00-11h00 |
Maintenance Modeling and Analysis - 1
Dragan BANJEVIC, Ji Ye Janet LAM
In this paper we propose a condition-based inspection planning policy that uses an amortized preventive replacement cost payable over the replacement cycle. We compare this model with traditional condition-based maintenance models that assume periodic inspections. We use a proportional hazards model for risk of failure and a Markovian process to model the system covariates. The decision policy provides an inspect-or-replace recommendation depending on the current age and conditions (covariates), and also the optimal time for the next inspection. The policy shows good results in comparison with other policies. The methodology is illustrated with an example from industry. A non-periodic inspection policy with amortized preventive maintenance costs
Michiel UIT HET BROEK, Ruud TEUNTER, Bram DE JONGE, Jasper VELDMAN
Many large plants, like power plants and refineries, commonly use so-called turnaround maintenance policies: the entire system is shut down for a certain period and maintained at once. Such policies allow maintenance activities to be clustered and planned long in advance, thereby minimizing both system downtime and logistics costs. However, the time between consecutive turnarounds is often long, and machines may deteriorate faster than expected. In such situations, an interesting question is whether it can be profitable to reduce production rates in order to avoid the need for maintenance before a turnaround. The maintenance literature typically assumes that machines always produce at the maximum production rate and that the deterioration rate therefore cannot be influenced. However, in many real-life situations production rates can be adjusted; for example, wind turbines can be decelerated, which lowers both the production rate and the deterioration rate. In this study, we consider a single unit with the option to adjust production rates based on condition information. We conclude that the flexibility to operate at different production rates reduces the probability of failure while increasing the expected total production. Condition-based Production Planning
Yun ZHANG, Anne BARROS, Antoine RAUZY
This article discusses a CBM model for a multi-unit system subject to exponential degradation and failures. One unit is continuously monitored while the other is periodically inspected. We consider maintenance actions implemented with a delay, and the time to complete maintenance is non-negligible. Two state-based models are built to assess maintenance policies and reliability indicators at the system level. We show that an implicit state-based model is superior to an explicit one in representing the time and delay to repair, reusing modelling patterns, and evaluating maintenance strategies. Implicit modelling for condition based maintenance of multi-unit systems
|
Order statistics and extremes
Agnieszka GORONCY, Mariusz BIENIEK
The paper concerns the determination of lower mean-variance bounds on the expectations of generalized order statistics (gOS) based on initial distributions with decreasing density (DD) or decreasing failure rate (DFR). The bounds are determined using the so-called projection method and are given along with the attainability conditions. The theoretical results are exemplified by one special case of gOS, progressively type II censored order statistics, which find many applications in reliability studies, in particular in life-testing experiments. We provide numerical values of the bounds along with their interpretation. Lower bounds on expected generalized order statistics from DD and DFR distributions with applications in reliability
Cristina BUTUCEA, Jean-François DELMAS, Richard FISCHER, Anne DUTFOY
In this paper we focus on the modelling of d-dimensional random vectors with fixed marginals whose components satisfy an ordering constraint almost surely. We aim to find the distribution which maximizes the Shannon entropy under these constraints. We provide the solution to the maximum entropy problem with explicit formulas; its density is a product of univariate functions on the support of the random vector.
We propose a nonparametric approach to estimate the density of the optimal joint distribution based on an available sample: we use an exponential model based on series of quasi-orthogonal polynomials specially designed to suit this particular structure. We thus achieve a fast convergence rate which depends only linearly on the dimension of our random vector, and does not suffer from the curse of dimensionality.
We apply the proposed method to two industrial cases: the modelling of the Young modulus as a function of temperature for the numerical simulation of welding, and the estimation of mechanical flaw dimensions in a component of a power plant. In both cases, we compare the results obtained with the maximum entropy approach to previously considered modelling schemes. Maximum entropy distribution of order statistics with given marginals with application to nuclear engineering
Clément ALBERT, Anne DUTFOY, Stéphane GIRARD
In the risk management context, the extreme-value methodology consists in estimating extreme quantiles - with return periods of one hundred years or more - from an extreme-value distribution fitted to the data. In this communication, we quantify the extrapolation limits associated with extreme quantile estimation.
To this end, we focus on the framework of the block maxima method and
we study the behaviour of the relative approximation error of a quantile estimator dedicated to the Gumbel attraction domain.
We give necessary and sufficient conditions for the error to converge towards zero and we provide a first order approximation of the latter.
We show that extrapolations can be greatly limited depending on the data distribution. On the extrapolation limits of extreme-value theory for risk management
|
Degradation modelling and analysis - 2
Qingqing ZHAI, Piao CHEN, Zhisheng YE
The Wiener process has found wide application as an important tool in degradation modelling. In a population of products, the degradation rate as well as the process variability may vary from unit to unit. Most existing studies of random-effects Wiener process models use normal distributions to account for the heterogeneous degradation rate. However, the normal random-effects model only considers unit-to-unit variation of the degradation rate, while the diffusion coefficient is assumed constant. In addition, the normal distribution is supported on the whole real line, which conflicts with the physical interpretation of the drift rate. To address these problems, we propose a novel Wiener process model in which an inverse Gaussian (IG) distribution characterizes the random effects. The IG random-effects Wiener process model overcomes the disadvantages of existing models and provides more flexibility in degradation modelling. An expectation-maximization algorithm is proposed to obtain the maximum likelihood estimates. The proposed model is applied to a laser degradation dataset to illustrate its applicability and effectiveness. A NEW WIENER PROCESS MODEL FOR DEGRADATION MODELLING
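The model class discussed here — a Wiener degradation path whose drift is drawn per unit from an inverse Gaussian distribution — can be simulated directly. A hedged sketch (all parameter values are illustrative; the paper's EM estimation step is not attempted):

```python
import numpy as np

rng = np.random.default_rng(7)

def failure_times(n_units, d, ig_mean, ig_scale, sigma, dt=0.02, t_max=100.0):
    """First-passage times to threshold d of X(t) = eta*t + sigma*B(t),
    where the drift eta is drawn per unit from an inverse Gaussian law
    (unit-to-unit heterogeneity); np.inf marks units surviving past t_max."""
    times = np.full(n_units, np.inf)
    for u in range(n_units):
        eta = rng.wald(ig_mean, ig_scale)    # IG-distributed random drift
        x, t = 0.0, 0.0
        while t < t_max:
            t += dt
            x += eta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            if x >= d:
                times[u] = t
                break
    return times

ft = failure_times(n_units=100, d=5.0, ig_mean=0.5, ig_scale=2.0, sigma=0.2)
```

Because the IG support is the positive half-line, every simulated unit degrades upward, which is the physical point the abstract makes against normal random effects.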
Alexandra BORODINA, Dmitry EFROSININ, Evsey MOROZOV
We consider a degradation process composed of a sum of successive phases, where repair is used to prevent a sudden failure. For the optimal control of such a system, calculation of the failure probability and other characteristics is critically important. If the degradation process is Markovian, the required steady-state performance measures are available analytically; for a non-Markov process this is a problem, so in the general case simulation must be used. We treat the degradation process as a regenerative process and consider a modification of the splitting method for speed-up estimation of its stationary performance. The approach is illustrated by constructing point and interval estimates via regenerative simulation. The failure probability speed-up estimation in controllable degradation system
Dan XU, Mengli XING
In this paper, considering the dependence between degradation distributions, a modeling method based on a time-varying copula is put forward to construct the joint degradation distribution. Specifically, Brownian motion with drift is introduced to model each degradation process, and a time-varying copula is then used to model the dependence structure among the degradation processes. Finally, the method is verified on a circuit board as a numerical example, and the parameter estimates obtained from constant and time-varying copulas are contrasted to demonstrate the effectiveness of the proposed time-varying copula model. MULTIVARIATE DEGRADATION MODELING METHOD BASED ON TIME-VARYING COPULA
|
Signatures - 2
Mariusz BIENIEK, Marco BURKSCHAT
We study conditions for unimodality of the lifetime distribution of a coherent system when the ordered component lifetimes are described by generalized order statistics. Results for systems with independent and identically distributed component lifetimes are included in this setting. In particular, coherent systems with strictly bimodal density functions are presented in the case of independent, standard uniformly distributed component lifetimes. On unimodality and bimodality properties of the lifetime distribution of some coherent systems
Konul BAYRAMOGLU KAVLAK
In this paper the reliability and mean residual life (MRL) functions of a system with active redundancy at the component and system levels are investigated. In active redundancy at the component level, the original and redundant components work together, and the lifetime of the system is determined by the maximum of the lifetimes of the original components and their spares. In active redundancy at the system level, the system has a spare; the original and redundant systems work together, and the lifetime of such a system is the maximum of the lifetimes of the system and its spare. The lifetimes of the original component and the spare are assumed to be dependent random variables. Reliability and mean residual life functions of coherent systems in an active redundancy
Somayeh ZAREZADEH, Majid ASADI
We consider a three-state network with states: up, partial performance, and down. It is assumed that the network remains in the up state for a random time $T_1$, and then moves to the state of partial performance until it fails at random time T. We present signature-based expressions for the Kullback-Leibler (K-L) divergence and the mutual information of the state lifetimes T and $T_1$. It is shown that the K-L information and the mutual information between $T_1$ and T depend only on the network structure (i.e., on the signature matrix of the network). Signature-based stochastic comparisons are also made to compare the K-L of the state lifetimes in different three-state networks. Upper and lower bounds on the K-L and mutual information between $T_1$ and T are investigated. Some illustrative examples are also provided. Signature-based information measures in three-state networks
|
Network reliability - 1
Network reliability - 1
In this paper we present a brief review of existing technologies for organizing wireless communication networks. Existing approaches to defining and calculating the reliability of such networks are surveyed. We draw attention to how disunited the existing approaches are and how insufficiently formalized the choice of a calculation method is, and we narrow down the class of systems for which the existing methods are applicable. Modern communication networks are characterized by complex architectural solutions, the unification of various technologies, instabilities, and structures that are mobile in space and time. These features call for a comprehensive, systematic approach to calculating reliability indicators, one that takes into account the most essential features of real-life systems.
Key words: Wireless communication networks, network reliability, unstable structure, the probability of access to the network. Methods to estimate reliability of wireless communication networks
Christian TANGUY
Successful operation of a mobile network requires preserving the active call when a customer moves from one cell to a neighboring one. The process of switching from one base station (antenna) to the next, called handover or handoff, involves resources that might otherwise be used for the establishment of new calls. Knowledge of the number of departures from a cell is important for assessing the probability of dropped calls and for the adequate provisioning of network resources. The Random WayPoint (RWP) model is currently the preferred stochastic description of the mobility pattern of customers. In a previous work, we used a circular RWP domain and showed that the total handover rate for an arbitrary cell is simply $\Lambda_{\rm total} = \frac{{\mathcal P}}{\pi \, {\rm E}(1/v)}$, where ${\mathcal P}$ is the cell perimeter and ${\rm E}(1/v)$ the expectation of $1/v$ over the velocity distribution. Here we consider a right-angled isosceles triangle as the RWP domain, which is not invariant under rotation. We explain why, even though the previous result does not apply, it still provides a very good approximation. Handover Rate in Wireless Networks and Random WayPoint Mobility Model: An Almost Simple Result
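The quoted handover-rate formula is straightforward to evaluate numerically. The snippet below (unit-radius circular cell and uniform velocity distribution, both chosen purely for illustration) checks a Monte Carlo estimate of ${\rm E}(1/v)$ against the exact value:

```python
import numpy as np

rng = np.random.default_rng(0)

def total_handover_rate(perimeter, v_samples):
    """Lambda_total = P / (pi * E[1/v]), with E[1/v] estimated by Monte Carlo."""
    return perimeter / (np.pi * np.mean(1.0 / v_samples))

# velocities uniform on [vmin, vmax]: E[1/v] = log(vmax/vmin)/(vmax - vmin)
vmin, vmax = 1.0, 10.0
v = rng.uniform(vmin, vmax, 200_000)
perimeter = 2.0 * np.pi * 1.0                     # unit-radius circular cell
rate_mc = total_handover_rate(perimeter, v)
rate_exact = perimeter / (np.pi * np.log(vmax / vmin) / (vmax - vmin))
```

Note the formula depends on the velocity law only through ${\rm E}(1/v)$, which is why slow customers dominate the handover budget.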
Natsumi TAKAHASHI, Tomoaki AKIBA, Hisashi YAMAMOTO, Xiao XIAO
Networks are applied extensively in the real world, for example in the scheduling of production and distribution management systems and in optimal-route search for Internet services. Such networks can be formulated as multi-objective networks with multiple criteria. In this study, we obtain optimal paths for such a multi-objective network. The extended Dijkstra's algorithm is effective in obtaining the Pareto solutions of a multi-objective network; however, it requires a large amount of memory as the number of criteria increases. Our previous study therefore proposed an algorithm in which standard paths generate a smaller search space than the extended Dijkstra's algorithm. This study evaluates the efficiency of this search-space reduction. We conduct experiments and compare the computing times of the proposed algorithms, which differ in the number of standard paths. Efficiency of Reducing Search Space by Standard Paths in Multi-objective Network
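As a toy illustration of the Pareto-optimality notion underlying this abstract (not the authors' standard-path algorithm, nor the extended Dijkstra's algorithm), the sketch below enumerates all simple paths of a small hypothetical two-criteria network and keeps only the non-dominated ones:

```python
# edge -> cost vector (e.g., (travel time, toll)); a hypothetical toy network
edges = {('s', 'a'): (1, 5), ('s', 'b'): (2, 2), ('a', 'b'): (1, 1),
         ('a', 't'): (1, 5), ('b', 't'): (2, 2)}

def all_paths(u, t, seen=()):
    """Enumerate all simple u -> t paths (brute force; fine for tiny graphs)."""
    if u == t:
        yield (u,)
    for (x, y) in edges:
        if x == u and y not in seen:
            for rest in all_paths(y, t, seen + (u,)):
                yield (u,) + rest

def dominates(c1, c2):
    """c1 dominates c2 if it is no worse in every criterion and strictly better in one."""
    return all(a <= b for a, b in zip(c1, c2)) and c1 != c2

def pareto_paths(s, t):
    cand = {p: tuple(sum(c) for c in zip(*(edges[e] for e in zip(p, p[1:]))))
            for p in all_paths(s, t)}
    return {p: c for p, c in cand.items()
            if not any(dominates(c2, c) for c2 in cand.values())}

front = pareto_paths('s', 't')   # non-dominated paths with their cost vectors
```

Label-setting methods such as the extended Dijkstra's algorithm obtain the same front without full enumeration, at the memory cost the abstract discusses.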
|
Decision making in reliability
Phuc DO, Christophe BERENGUER, Emanuele BORGONOVO
Reliability importance measures are widely used for decision aiding in reliability studies, risk analyses and maintenance optimization. We propose a novel time-dependent importance measure for systems of multiple non-repairable components. The proposed importance measure of a component or group of components is defined as its ability to improve the system reliability given the components' condition. To take economic aspects into account (e.g., maintenance costs, economic dependence between components and the cost benefit of maintenance operations), an extension of the proposed importance measure is then investigated. Thanks to these importance measures, a component or group of components can be "optimally" selected for preventive maintenance with regard to reliability criteria and/or financial issues. A numerical example of a 5-component system illustrates the use and advantages of the proposed importance measures. CONDITIONAL RELIABILITY-BASED IMPORTANCE MEASURES
Eugenia STOIMENOVA
A fixed sample size procedure for selecting the $t$ best system components is considered. The probability requirement is set to be satisfied under the indifference zone formulation. In order to minimize the average losses from misclassification, we use a loss function that is sensitive to the number of misclassifications. An upper bound on the corresponding risk is derived for location parameter distributions. The risk function for the Least Favorable Configuration is derived in integral form for a large class of distribution functions. On the upper bound of the risk in selection of the t best items
Matieyendou LAMBONI
A deep exploration of mathematical models in reliability analysis is often carried out using model-free methods. Variance-based sensitivity analysis and multivariate sensitivity analysis are among them; they aim at apportioning the variability of the model output(s) into input factors and their interactions. Sobol's first-order index of a single factor or of a group of factors, which accounts for the effect of the input factor(s), serves as a practical tool to assess the order of interactions among input factors. In this abstract, we propose an optimal estimator of the (non-normalized) first-order index, including its rate of convergence. The optimal estimator of the non-normalized index makes use of a kernel of degree ($p,\, q$). We also provide the statistical properties of the estimator of the first-order index, including asymptotic confidence bounds. An illustration on a flood risk model shows that our estimator improves the estimation of the first-order indices. Statistical inference on sensitivity indices of mathematical models: an illustration to a flood risk model
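The non-normalized first-order index discussed here, $D_i = \mathrm{Cov}(Y, Y_i)$, is commonly estimated with the standard pick-freeze scheme; the authors' kernel-of-degree-$(p,q)$ estimator is a refinement of that baseline. A sketch on an additive test model (the model and sample sizes are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def first_order_index(model, d, i, n):
    """Pick-freeze estimate of the non-normalized first-order Sobol index
    D_i = Cov(Y, Y_i): Y_i reuses the i-th input of Y and redraws the others."""
    X = rng.uniform(size=(n, d))
    Xp = rng.uniform(size=(n, d))
    Xp[:, i] = X[:, i]                       # "freeze" the i-th input
    y, yi = model(X), model(Xp)
    return np.mean(y * yi) - np.mean(y) * np.mean(yi)

# additive test model Y = X0 + 2*X1 on [0,1]^2: D_i equals Var of the i-th term
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
D0 = first_order_index(model, d=2, i=0, n=200_000)   # true value 1/12
D1 = first_order_index(model, d=2, i=1, n=200_000)   # true value 4/12
```

Normalizing by $\mathrm{Var}(Y)$ recovers the usual Sobol indices $S_i = D_i/\mathrm{Var}(Y)$.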
|
11h00-11h30 | Coffee break | |||||
11h30-13h00 |
Panel Session: Is Reliability a New Science ?
Mark BROWN, Regina LIU, Sheldon M. ROSS, Nozer D. SINGPURWALLA
A book currently under review asserts in its title, and based on the claim by Academician Boris Gnedenko, that “Reliability is a New Science”. This appears to be a heroic claim, with the origins of reliability analysis going back at least as far as statistical process control developed at Bell Laboratories in the 1920s, and von Braun's rocket science of World War II. However, the question of whether reliability is indeed a science is an interesting one and warrants serious discussion. This session gives a panel of academics and editors an opportunity to discuss this question and related questions, with active participation from the audience. The panel discussion is non-technical. Panel session: Is Reliability a New Science ?
|
Extremes in Reliability and Safety
M. Ivette GOMES, Paula REIS, Luisa CANTO E CASTRO, Sandra DIAS
In reliability theory any coherent system can be represented as a series-parallel (SP) or a parallel-series (PS) system. Its lifetime can thus be written as the minimum of maxima or the maximum of minima. For large-scale coherent systems it is sensible to assume that the number of system components goes to infinity and work with the possible non-degenerate extreme value distributions either for maxima or for minima to get adequate lower and upper bounds for the system reliability. But rates of convergence to these limiting laws are often slow and penultimate approximations can provide a faster rate of convergence. The identification of the possible limit laws for the system reliability of homogeneous PS systems is sketched and the gain in accuracy is assessed whenever a penultimate approximation is used instead of the ultimate limiting one.
Reliability Bounds through Penultimate Approximations for Extremes
Anne DUTFOY, Alain SIBLER
In this paper, we provide a tutorial on multivariate extreme value methods, which allow one to estimate the risk associated with rare events occurring jointly. We draw particular attention to issues related to extremal dependence, and we insist on the asymptotic independence feature. We apply multivariate extreme value theory to two data sets related to hydrology and meteorology: first, the joint flooding of two rivers, which puts at risk the facilities lying downstream of the confluence; then the joint occurrence of high-speed wind and river flooding, which might affect the safety of some facilities. Multivariate Extreme Value Theory - A Tutorial with Applications to Hydrology and Meteorology
Liliana CUCU-GROSJEAN, Adriana GOGONEL
The time behaviour of cyber-physical systems relies on the execution times of the programs composing the cyber part of such systems. In our paper we present a measurement-based method that provides results in the absence of sufficiently long intervals of simulation. Our result is based on the use of Extreme Value Theory. Probabilistic foundations for the time prediction of cyber-physical systems
|
Special Session in Honor of Wenbin Wang - 1
Special Session in Honor of Wenbin Wang - 1 Phil SCARF, Cristiano CAVALCANTE, Lola BERRADE
We calculate the cost-rate of an inspection policy for a protection or preparedness system. This system operates on demand, and may be good, defective or failed. Inspections are imperfect and false positives and negatives are possible. Further, the false negative probability depends on the system state, so that the inspection test is more reliable when the system is failed than when it is defective. IMPERFECT INSPECTION OF A PREPAREDNESS SYSTEM WITH A DEFECTIVE STATE
Fei ZHAO, Fengfeng XIE
Considering the stochastic dependence and the failure sequence among units in multi-unit systems, this paper presents an inspection policy for a two-dependent-unit system using the two-stage delay-time technique, in which the system is checked periodically. Once an inspection identifies units in the defective or failed state, repair is carried out. An optimization model is then established using renewal-reward theory by considering two independent renewal scenarios. The results obtained from a numerical example illustrate the application of the proposed model. OPTIMAL INSPECTION POLICY FOR A TWO-DEPENDENT-UNIT SYSTEM BASED ON THE DELAY-TIME TECHNIQUE
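For intuition, the Monte Carlo sketch below evaluates the long-run cost rate of periodic inspection for a single unit under the two-stage delay-time model via the renewal-reward argument. It assumes perfect inspections and exponential defect-arrival and delay times; the two-unit dependence structure of the paper is not modelled, and all distributions and costs are illustrative.

```python
import random

def delay_time_cost_rate(T, lam, mu, c_i, c_r, c_f, n_cycles=20000, seed=7):
    """Monte Carlo long-run cost rate for periodic inspection (period T)
    of a single unit under the two-stage delay-time model:
    defect arrival ~ Exp(lam), delay to failure ~ Exp(mu).
    Inspections cost c_i, defect repair c_r, failure c_f; each repair or
    failure renews the unit, so cost rate = E[cycle cost] / E[cycle length]."""
    random.seed(seed)
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        u = random.expovariate(lam)          # time until the defect appears
        h = random.expovariate(mu)           # delay time from defect to failure
        k = int(u // T) + 1                  # first inspection after the defect
        if u + h < k * T:                    # unit fails before that inspection
            total_cost += c_f + c_i * int((u + h) // T)
            total_time += u + h
        else:                                # defect detected at inspection k
            total_cost += c_r + c_i * k
            total_time += k * T
    return total_cost / total_time
```

The inspection period T would then be optimized by evaluating this cost rate over a grid.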
Xiaoyang MA, Rui PENG, Wenjuan ZHANG, Xiaodong ZHANG
To detect possible indications of failure in plant systems, periodic inspections are often carried out. This paper considers a single-unit system subject to two types of failure: a two-stage delay-time failure and a traditional 0-1 logic failure. Unlike other studies, which consider these two types of failure to be independent of one another, this paper assumes that the two failure modes are correlated. A copula function is utilized to describe the joint distribution. Periodic inspections are used to detect the defective stage of the two-stage failure mode, while preventive replacement is used to avoid possible failure in the 0-1 logic mode. The renewal process of this system is analyzed and the expected long-run cost per unit time (ELRCUT) is derived. The optimal inspection period and preventive replacement interval that minimize ELRCUT are studied. Numerical examples are presented to illustrate the proposed model. A preventive maintenance model subject to two types of correlated failures
|
Multi-State System Reliability - 2
Multi-State System Reliability - 2 Yonit BARRON, Uri YECHIALI
Consider a deteriorating repairable Markovian system with N stochastically independent identical units. The lifetime of each unit follows a discrete phase-type distribution. There is one online unit and the others are in standby status. In addition, there is a single repair facility and the repair time of a failed unit has a geometric distribution. The system is inspected at equally spaced points in time. After each inspection, either repair to a better state or a full replacement is possible. We consider state-dependent operating costs, repair costs that are dependent on the extent of the repair, and failure penalty costs. Applying dynamic programming, we show that under reasonable conditions on the system's law of evolution and on the state-dependent costs, a generalized control-limit policy is optimal for the expected total discounted criterion for both cold standby and warm standby systems. GENERALIZED CONTROL-LIMIT PREVENTIVE REPAIR POLICIES FOR DETERIORATING COLD STANDBY MARKOVIAN SYSTEMS
Juan Eloy RUIZ-CASTRO, Mohammed DAWABSHA
A complex cold standby system subject to repairable and non-repairable failures is modeled through a Discrete Markovian Arrival Process with marked arrivals (D-MMAP). Initially, the system is composed of a general number of units, K, and RK repairpersons. The online unit can undergo an internal failure (repairable or non-repairable) due to wear or an external shock. When an external shock occurs, it can provoke a modification in the internal behavior of the online unit or a fatal failure. A non-repairable failure implies that the unit is removed, and the system continues working with one unit fewer; when this occurs, the number of repairpersons is also modified. Some interesting measures, such as the mean operational time and the mean number of events up to a given time, have been worked out. Different costs and rewards are included in the model and the expected cost up to a certain time is calculated. An analysis of the number of repairpersons according to the number of units in the system has been performed by considering the total net reward per unit of operational time. The model is built in a matrix-algorithmic form which eases the computational implementation. The work has been implemented computationally with Matlab. MODELING A REDUNDANT MULTI-STATE SYSTEM WITH LOSS OF UNITS THROUGH A MMAP
Gregory LEVITIN, Liudong XING
This paper presents a new methodology for modeling the dynamic performance of multi-state systems consisting of repairable elements. The performance, time-to-failure and repair time distributions of the elements can be different and arbitrary. A discrete numerical algorithm is proposed for evaluating the instantaneous availability of each element, which further defines the stochastic process of the element's performance. A universal generating function technique is then used for assessing such system performance metrics as expected system performance, expected probability of meeting system demand, expected amount of unsupplied demand over a particular mission time, expected time needed to perform a given amount of work, and expected connectivity (depending on the type of system considered). Examples of series-parallel systems with arbitrary structure and consecutively connected systems are presented. The method allows finding optimal element loading, optimal system structure and optimal element sequencing. DYNAMIC PERFORMANCE AND RELIABILITY OF MULTI-STATE SYSTEMS CONSISTING OF ELEMENTS WITH ARBITRARY TIME-TO-FAILURE AND REPAIR TIME DISTRIBUTIONS
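The universal generating function technique mentioned above can be sketched as follows: a u-function maps each performance level to its probability, and composition follows the structure function (for flow-transmission systems, capacities add in parallel and the bottleneck rules in series). The element data below are illustrative.

```python
def ugf_compose(u1, u2, op):
    """Compose two u-functions, represented as dicts mapping a
    performance level to its probability."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

def expected_performance(u):
    return sum(g * p for g, p in u.items())

def prob_meets_demand(u, demand):
    return sum(p for g, p in u.items() if g >= demand)

# Illustrative 3-element flow-transmission system: elements 1 and 2 in
# parallel (capacities add), in series with element 3 (bottleneck = min).
e1 = {0: 0.1, 5: 0.9}
e2 = {0: 0.2, 5: 0.8}
e3 = {0: 0.05, 8: 0.95}
u_par = ugf_compose(e1, e2, lambda a, b: a + b)
u_sys = ugf_compose(u_par, e3, min)
```

In the dynamic setting of the paper, the element probabilities would be the instantaneous availabilities computed by the discrete numerical algorithm at each time point.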
|
Cure Models
Cure Models Maïlis AMICO, Catherine LEGRAND, Ingrid VAN KEILEGOM
In survival analysis it often happens that a certain fraction of the subjects under study never experience the event of interest, i.e. they are considered 'cured'. In the presence of covariates, a common model for this type of data is the mixture cure model, which assumes that the population consists of two subpopulations, namely the cured and the non-cured ones, and writes the survival function of the whole population given a set of covariates as a mixture of the survival function of the cured subjects (which equals one) and the survival function of the non-cured ones. In the literature one usually assumes that the mixing probabilities follow a logistic model. This is however a heavy modeling assumption, which is often not met in practice. Therefore, in order to have a flexible model which at the same time does not suffer from curse-of-dimensionality problems, we propose in this paper a single-index model for the mixing probabilities. For the survival function of the non-cured subjects we assume a Cox proportional hazards model. We estimate this model using a maximum likelihood approach. We also carry out a simulation study, in which we compare the estimators under the single-index model and under the logistic model for various model settings, and we apply the new model and estimation method on two data sets. The single-index/Cox mixture cure model
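The mixture decomposition described above can be written directly in code. The sketch below uses a logistic incidence part and, as a simple stand-in for the semiparametric Cox latency (and for the single-index extension the paper proposes), an exponential proportional-hazards survival function; all parameters are illustrative.

```python
import math

def pop_survival(t, x, gamma, beta, lam0):
    """Population survival in a mixture cure model:
    S_pop(t|x) = 1 - p(x) + p(x) * S_u(t|x),
    with logistic incidence p(x) = P(uncured | x) and an exponential
    proportional-hazards latency standing in for the Cox part."""
    eta = gamma[0] + sum(g * xi for g, xi in zip(gamma[1:], x))
    p = 1.0 / (1.0 + math.exp(-eta))                  # probability of being uncured
    hr = math.exp(sum(b * xi for b, xi in zip(beta, x)))
    return 1.0 - p + p * math.exp(-lam0 * hr * t)     # plateaus at the cure fraction
```

The defining feature of cure models is visible here: as t grows, the survival curve plateaus at the cure fraction 1 - p(x) instead of dropping to zero.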
Fotios MILIENOS
In this work, we propose a re-parameterization of a recently introduced family of cure rate models; the new family has as special cases, among others, the binary, the promotion time and the negative binomial cure rate model. Some of the properties of the proposed model and the problem of the estimation of model parameters are also discussed.
A new family of cure rate models
Laurent BORDES, Olayide BOUSSARI, Valérie JOOSTE
A general methodology is proposed for testing the null hypothesis that an excess hazard rate model, with or without covariates, belongs to a parametric family. Estimating the excess hazard rate function parametrically through the maximum likelihood method and non-parametrically (or semi-parametrically), we build a discrepancy process which is shown to be asymptotically Gaussian under the null hypothesis. Based on this result we are able to build statistical tests in order to decide whether or not the null hypothesis is acceptable. We illustrate our results by the construction of chi-square tests whose behavior is studied through a Monte Carlo study. Then the testing procedure is applied to population-based colon cancer data. Keywords: Excess hazard model; Maximum likelihood estimation; Semiparametric estimation; Covariates; cancer. Testing parametric excess hazard models allowing cure rate
|
Sensitivity Analysis in Reliability
Sensitivity Analysis in Reliability Alexander ANDRONOV, Vladimir RYKOV, Vladimir VISHNEVSKY
A system with two hot redundant renewable channels is considered. The functioning of both channels is described by independent alternating processes. The lifetimes of the channels are exponentially distributed random variables, while renewal times have a general absolutely continuous distribution. The system is working if at least one channel is working. Transient and stationary regimes are considered and the system reliability function is studied. RELIABILITY OF A SYSTEM WITH TWO PARALLEL RENEWABLE CHANNELS
Dmitry EFROSININ, Mais FARKHADOV
The paper provides a sensitivity analysis of performance and reliability measures in an unreliable queueing system with multiple servers and a constant retrial discipline. The servers can differ in service and reliability characteristics. We have proved the insensitivity of the mean number of customers in the system to the type of allocation policy for equal service rates and confirmed a weak sensitivity in the general case of unequal service rates. A further sensitivity analysis is conducted to investigate the effect of changes in system parameters on the reliability function, the distribution of the number of failures of a server, and the maximum queue length during a lifetime. Sensitivity analysis of performance and reliability measures in a multi-server retrial unreliable queueing system
Vladimir RYKOV, Dmitry KOZYREV
The reliability function of a double-redundant hot-standby repairable system with exponential lifetime and general repair time distributions is considered. The problem of its sensitivity to the repair time distribution is discussed. On reliability function of double redundant system with general repair time distribution
|
13h00-14h15 | Lunch | |||||
14h15-19h30 | Social programme |
9h00-9h45 | Narayanaswamy BALAKRISHNAN Cure Models In this talk, I will first introduce a mixture cure rate model as it was originally introduced. After that, I will formulate cure rate model in the context of competing risks and present some flexible families of cure rate models. I will then describe various inferential results for these models. Next, as a two-stage model, I will present destructive cure rate models and discuss inferential methods for it. In the final part of the talk, I will discuss various other extensions and generalizations of these models and the associated inferential methods. All the models and inferential methods will be illustrated with simulation studies as well as some well-known melanoma data sets. Cure Models Chair: William Q. MEEKER, Room A | |||||
9h45-10h30 | Sophie MERCIER Probabilistic Construction and Properties of Gamma Processes and Extensions Standard gamma processes are widely used for cumulative deterioration mod- eling purpose and to make prediction over a system future behavior. The point of this presentation is to make some review on its probabilistic construction through series representation and on its jump behavior. Next, due to some pos- sibly restrictive properties of a gamma process in an applicative context, some (univariate) extensions from the literature will be exposed. Finally, based on the fact that it is more and more frequent that several deterioration indicators are observed at the same time, some possible paths for multivariate extensions will be provided. Probabilistic Construction and Properties of Gamma Processes and Extensions Chair: William Q. MEEKER, Room A | |||||
10h30-11h00 | Coffee break | |||||
11h00-12h30 |
Reliability of Complex Systems
Reliability of Complex Systems Kristina ROGNLIEN DAHL
We consider the problem of managing a hydroelectric power plant system. The system consists of N hydropower dams, each with a maximum production capacity. The inflow to the system is a stochastic process representing the precipitation to each dam. The manager can control how much water to release from each dam at each time, and would like to choose this in a way which maximizes the total revenue from the initial time to some terminal time T. The total revenue of the hydropower dam system depends on the price of electricity, which is also a stochastic process. The manager must take this price process into account when controlling the draining process. However, we assume that the manager only has partial information of how the price process is formed. She can observe the price, but not the underlying processes determining it. By using the conjugate duality framework of Rockafellar, we derive a dual problem to the problem of the manager. This dual problem turns out to be simple to solve in the case where the price process is a martingale or submartingale with respect to the filtration modelling the information of the dam manager. Management of a hydropower system via convex duality
Jørund GÅSEMYR, Bent NATVIG
Multistate monotone systems are used to describe technological or biological systems when the system itself and its components can perform at different operationally meaningful levels. This generalizes the binary monotone systems used in standard reliability theory. In this paper we consider the availabilities of the system in an interval, i.e. the probabilities that the system performs above the different levels throughout the whole interval. In complex systems it is often impossible to calculate these availabilities exactly, but if the component performance processes are independent, it is possible to construct lower bounds based on the component availabilities to the different levels over the interval. In the present paper we show that by treating the component availabilities over the interval as if they were availabilities at a single time point we obtain an improved lower bound. Unlike previously given bounds, the new bound does not require the identification of all minimal path or cut vectors. Improved availability bounds for binary and monotone systems with independent component processes
Anne BARROS, Nicolas LEFEBVRE
A virtual age model is developed to assess the effect of tests and preventive maintenance on Safety Instrumented Systems. Numerical results are presented to illustrate the added value of the model, both from a statistical point of view (choice of the lifetime laws) and from a probabilistic point of view (calculation of the system unavailability). Contribution to modeling of test effects for Safety Instrumented Systems
|
Big Data in Reliability
Big Data in Reliability Wujun SI, Qingyu YANG, Xin WU
This research conducts reliability analysis of advanced high strength dual-phase steel utilizing material microstructure information. A new statistical model called distribution-based functional linear model is proposed. A maximum penalized likelihood method is developed to estimate the model parameters and overcome the overfitting issue. Physical experiments are conducted for verification and illustration. Reliability Analysis of Advanced High Strength Dual-phase Steels by Utilizing Material Microstructure Information
Vitali VOLOVOI
No abstract available due to a last-minute change. The Impact of Internet of Things on Maintenance Modeling.
Mei-Ling Ting LEE, George WHITMORE
Many health or engineering systems experience gradual degradation while simultaneously being exposed to a stream of random shocks of varying magnitude that eventually cause a failure event when a shock exceeds the residual strength of the patient or system. This failure mechanism is found in diverse fields of application. For example, the underlying process of osteoporotic hip fractures can be modeled as a composite of a chronic degradation process for latent skeletal health combined with a random stream of shocks from external traumas, which taken together trigger fracture events.
A tractable new family of shock-degradation models will be presented. This family has the attractive feature of defining the failure event as a first passage event and the time to failure as a first hitting time (FHT) of a threshold by an underlying stochastic process. Such FHT models are useful in practical applications because, first, they usually describe the failure mechanism in a realistic manner and, second, they naturally accommodate regression structures that subsequently can be analyzed and interpreted using threshold regression methods.
The shock-degradation family includes a wide class of underlying degradation processes. We derive the survival function for the shock-degradation process as a convolution of the Fréchet shock process and any candidate degradation process that possesses stationary independent increments. Statistical properties of the survival distribution will be discussed.
A Shock-degradation Model for Time-to-event Analysis
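A schematic Monte Carlo of the failure mechanism described above: strength degrades over time while Fréchet-distributed shocks arrive in a Poisson stream, and the failure time is the first hitting time, i.e. the first epoch at which a shock exceeds the residual strength. The linear degradation stand-in and all parameters are illustrative assumptions, not the paper's model.

```python
import math
import random

def first_failure_time(strength, drift, shock_rate, alpha, scale,
                       horizon=1000.0, seed=11):
    """Schematic shock-degradation failure time: latent strength decays
    linearly (an illustrative stand-in for a stationary-independent-
    increment degradation process) while Frechet(alpha, scale) shocks
    arrive as a Poisson(shock_rate) stream; failure occurs at the first
    shock epoch where the shock exceeds the residual strength."""
    random.seed(seed)
    t = 0.0
    while t < horizon:
        t += random.expovariate(shock_rate)            # next shock epoch
        residual = strength - drift * t                # degraded strength
        if residual <= 0.0:
            return t                                   # strength exhausted
        u = random.random()                            # Frechet via inverse CDF
        shock = scale * (-math.log(u)) ** (-1.0 / alpha)
        if shock > residual:
            return t
    return horizon
```

Repeating the simulation over many seeds would give an empirical estimate of the survival function whose closed form the paper derives by convolution.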
|
Modeling and Analysis of Correlated Failure Time Data: Copulas and Spatial aspects
Modeling and Analysis of Correlated Failure Time Data: Copulas and Spatial aspects Roel BRAEKERS, Leen PRENEN, Luc DUCHATEAU
In the analysis of unbalanced clustered survival data, two types of models are commonly used when we are interested in the association between the lifetimes: frailty models and copula models. Frailty models assume that, conditional on a common frailty term for each cluster, the hazard functions of individuals within that cluster are independent. These unknown frailty terms express the association of individuals within a cluster. Copula models, on the other hand, assume that the joint survival function of the individuals within a cluster is given by a copula function evaluated at the marginal survival functions of the individuals. The copula function hereby describes the association between the lifetimes within a cluster. A major disadvantage of present copula models is that the cluster sizes must be balanced and small to set up manageable estimation procedures. We describe in this manuscript a copula model for unbalanced clustered survival data based on the class of Archimedean copulas with completely monotone generator. After introducing estimators for the different model parameters, we illustrate the method on a data set containing the time to first insemination in dairy cows, with cows clustered in herds. Modeling unbalanced clustered multivariate survival data by Archimedean copula functions
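The Archimedean construction with a completely monotone generator handles clusters of any size with a single formula, which is what makes the unbalanced case tractable. A sketch using the Clayton copula (one member of the class; the marginal values and dependence parameter are illustrative):

```python
def clayton_joint_survival(margins, theta):
    """Joint survival of a cluster of arbitrary size n under a Clayton
    copula (Archimedean, completely monotone generator for theta > 0):
    S(t_1,..,t_n) = (S_1^-theta + ... + S_n^-theta - n + 1)^(-1/theta),
    applied to the marginal survival probabilities S_i."""
    n = len(margins)
    return (sum(u ** -theta for u in margins) - n + 1) ** (-1.0 / theta)
```

Because the formula only involves a sum over the cluster, clusters of different sizes (herds with different numbers of cows) pose no structural difficulty.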
Yildiz YILMAZ, Candemir CIGSAR, Hensley MARIATHAS
Statistical methods based on the strong independence assumption of dependent gap times between event occurrences may lead to misleading inference on important features of recurrent events. We discuss a copula modelling approach for dependent gap times of recurrent event processes. We obtain parametric and semiparametric estimates of the marginal distributions of the gap times under a copula model with carryover effects. The use of copulas for modelling dependent gap times of recurrent event data
Mitra FOULADIRAD, Franck CORSET, Christian PAROISSIN
In this paper, we consider n components displayed on a structure (e.g., a steel plate). We define a Cox-type model for the hazard function which includes spatial dependency between components. The state (non-failed or failed) of each component is observed at some inspection times. From these data, we propose to use the SEM algorithm to estimate the parameters of the model. A study based on numerical simulation is then provided.
A Cox model for component lifetimes with spatial interactions
|
Multi-State System Reliability - 3
Multi-State System Reliability - 3 Liwei CHEN
Electronic noses have been developed for qualitative and quantitative analysis of complex odor samples. As the sensing equipment, the sensor array is the most important component of an E-nose. Unfortunately, it is usually the most vulnerable and non-repairable part. Thus, the sensor array, which largely determines the stability of the E-nose, requires a high level of reliability. This paper models the reliability of a sensor array system comprising several independent odor sensors. Each sensor consists of three main units: a main sensing unit, a reference unit and an auxiliary unit. According to the different failure situations of these units, the odor sensor and the sensor array system each have three states. Based on the multi-state fault tree (MFT) and the binary decision diagram (BDD), a reliability analysis method is provided for this model. The possibility of uncovered failure caused by special failure sequences is also discussed. Finally, a case study is given to illustrate the application of the proposed analysis method. A BDD-BASED APPROACH FOR RELIABILITY ANALYSIS OF MULTI-STATE SENSOR ARRAY SYSTEM
Fumio OHI
System performance is usually improved by improving the components. An importance measure of a component provides an index of the preference order for improving components, and is also used for risk and safety analysis of large systems such as nuclear power reactors.
In this paper, based on an extension of the critical state vectors of a binary state system, we propose importance measures for a multi-state system, incorporating the stochastic dynamics of the components. The stochastic processes are not assumed to be monotone, and thus include the case of a maintained system as a special case. The state spaces are assumed to be partially ordered sets.
The set of critical state vectors of a component describes the critical circumstances in which the component contributes crucially to the system.
The basic idea of the definition is to count the number of the system's pre-defined specific transitions triggered by component i's specific transitions. We also derive the stationary importance measure for components whose stochastic dynamics are Markov chains taking values in the partially ordered state space. Stochastic Dynamical Importance Measures of a Multi-state System
Heping JIA, Yi DING, Hanlin LIU, Yonghua SONG
The electricity capacities of customers in demand response (DR) have been utilized to provide reserves for power systems due to the development of advanced infrastructures. However, several uncertainties, especially random failures of cyber physical systems (CPS) and stochastic behaviors of customers, inevitably exist and impact the reliability of power systems. In this paper, reliability models for both the CPS and the customers are proposed utilizing Lz-transform techniques. We consider a hierarchical decentralized control framework for the CPS. The electricity consumption characteristics and participation performances of customers are embedded in the proposed model. The time-varying reliability of the system is evaluated using the proposed method. POWER SYSTEM RELIABILITY EVALUATION CONSIDERING UNCERTAINTIES OF CYBER PHYSICAL SYSTEM AND DEMAND RESPONSE
|
Censoring Methodology
Censoring Methodology Katherine DAVIES, William VOLTERMAN
In assessing data from lifetime studies, the presence of competing causes of failure, commonly referred to as competing risks, needs to be addressed. In this paper we consider progressively Type-II right censored competing risks data when the lifetimes are assumed to come from a linear exponential distribution. We develop likelihood inference and numerically investigate the performance of the associated estimators via an extensive Monte Carlo simulation study. Progressively Type-II Censored Competing Risks Data from the Linear Exponential Distribution
Ping Shing CHAN, Yee Lam MO
The lifetime of a coherent system of n components with identical exponential lifetimes is considered. We derive its density function when the joint distribution of these n components is represented by the Gumbel copula. Then the likelihood function of the dependence parameter of the copula and the rate parameter of the component lifetime, based on a random sample of m system lifetimes, is constructed. Unfortunately the likelihood is an unbounded function of the dependence parameter and the maximum likelihood estimator does not exist. Therefore we analyze the data via Bayesian inference by assuming the prior distribution of the parameters to be known. The posterior distribution of the unknown parameters is obtained by the Metropolis-Hastings-within-Gibbs algorithm. The proposed method is then illustrated by a simulated example. Bayesian Inference for the System Lifetimes under Gumbel Copulas
Marius HERMANNS, Erhard CRAMER
In this paper, point and interval estimates for the scale parameter of the component lifetime distribution of a k-out-of-n system are obtained when the component lifetimes are assumed to be independent and identically exponentially distributed and the system lifetimes are (possibly) subject to progressive censoring. It is shown that the maximum likelihood estimator (MLE) of the scale parameter is unique in this setting and that it can be computed by a fixed-point iteration procedure. It is illustrated that the fixed-point approach is superior to the Newton-Raphson method, which does not converge for all initial values. Furthermore, exact confidence intervals for the scale parameter are discussed based on progressively Type-II censored system lifetimes. LIKELIHOOD INFERENCE BASED ON PROGRESSIVELY CENSORED K-OUT-OF-N SYSTEM FAILURE DATA WITH EXPONENTIAL COMPONENT LIFETIMES
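A fixed-point iteration of this kind can be illustrated on a simple special case: complete samples of parallel-system (1-out-of-n:G) lifetimes with iid Exp components of mean theta. There the score equation rearranges to theta = (1/m)[sum t_i - (n-1) sum t_i/(exp(t_i/theta) - 1)]. The sketch below adds a damping step for stability; the paper's own scheme, for progressively censored k-out-of-n data, differs in detail, and all simulation settings are illustrative.

```python
import math
import random

def mle_theta_parallel(lifetimes, n, tol=1e-10, max_iter=500):
    """Damped fixed-point iteration for the exponential mean theta of
    component lifetimes, from m observed lifetimes of parallel systems
    of n iid exponential components. The score equation rearranges to
        theta = (1/m) * (sum t_i - (n-1) * sum t_i / (exp(t_i/theta) - 1)),
    and averaging successive iterates stabilizes the recursion."""
    m = len(lifetimes)
    mean_t = sum(lifetimes) / m
    theta = mean_t                                   # starting value
    for _ in range(max_iter):
        g = mean_t - (n - 1) / m * sum(t / math.expm1(t / theta)
                                       for t in lifetimes)
        new = max(0.5 * (theta + g), 0.01 * mean_t)  # damping + positivity floor
        if abs(new - theta) < tol:
            return new
        theta = new
    return theta

# Illustrative check on simulated parallel-system lifetimes (true theta = 2)
random.seed(5)
data = [max(random.expovariate(0.5) for _ in range(3)) for _ in range(4000)]
theta_hat = mle_theta_parallel(data, 3)
```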
|
Maintenance Modelling
Maintenance Modelling Inma T. CASTRO
This presentation compares two imperfect repair models for a degrading system modelled according to a non-homogeneous gamma process. In the first model, the repair removes $\rho_1\%$ of the degradation accumulated by the system since the last maintenance action. In the second model, the repair removes $\rho_2\%$ of the age accumulated by the system since the last maintenance action. The stochastic processes that describe the degradation of the maintained system are presented. Under the assumption that the system degradation follows a non-homogeneous gamma process with scale parameter described by a power law, an equivalence property between the two types of repair is defined. Under this equivalence property, the expectation and the variance of the two resulting processes are compared and some stochastic properties are also shown. Imperfect repair models in a degrading system. The equivalent case
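A simulation sketch of the first repair model: periodic repairs remove a fixed percentage of the degradation accumulated since the last maintenance. For simplicity the sketch uses a stationary gamma process rather than the non-homogeneous power-law case of the talk, and all parameter values are illustrative.

```python
import random

def simulate_degradation(horizon, dt, a, b, tau, rho, seed=42):
    """Sample path of gamma-process degradation (shape rate a per unit
    time, scale b) with imperfect repairs every tau time units that
    remove rho% of the degradation accumulated since the last repair."""
    random.seed(seed)
    steps = int(round(horizon / dt))
    per_repair = int(round(tau / dt))
    level, since_repair = 0.0, 0.0
    path = [0.0]
    for step in range(1, steps + 1):
        inc = random.gammavariate(a * dt, b)   # stationary gamma increment
        level += inc
        since_repair += inc
        if step % per_repair == 0:             # imperfect repair epoch
            level -= (rho / 100.0) * since_repair
            since_repair = 0.0
        path.append(level)
    return path
```

Running the same seed with rho = 0 gives the unmaintained path, so the effect of the repairs can be compared pathwise.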
Mahmood SHAFIEE, Maxim FINKELSTEIN
Extending the service life of ageing systems has been of great interest to asset managers in recent years. In this paper, we present an optimisation model to determine the optimal length of life extension and preventive maintenance (PM) strategy such that the expected total cost of systems during their extended life phase is minimized. OPTIMAL LIFE EXTENSION PERIOD AND MAINTENANCE STRATEGY DECISIONS FOR AGEING SYSTEMS
Maxim FINKELSTEIN, Ji Hwan CHA, Gregory LEVITIN
We consider preventive maintenance (age replacement) of items operating in a random environment modeled by a Poisson process of shocks. An item is replaced either on failure or on the predetermined replacement time, whichever comes first. Each shock in our stochastic model has a double effect. On one hand, it acts directly on the failure rate of an item, which results in the corresponding stochastic failure rate process. On the other hand, each shock causes additional ‘damage’, which can be attributed, e.g., to a short drop in the output of an item. ON SOME MAINTENANCE STRATEGIES FOR SYSTEMS OPERATING IN A RANDOM ENVIRONMENT MODELED BY SHOCKS
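For reference, the classical age-replacement cost rate that this shock-based model generalizes can be computed and optimized numerically: replace on failure (cost c_f) or at age T (cost c_p), whichever comes first. The Weibull lifetime and cost values below are illustrative; the paper's shock-driven failure rate process is not modelled here.

```python
import math

def weibull_reliability(t, shape, scale):
    return math.exp(-((t / scale) ** shape))

def age_replacement_cost_rate(T, shape, scale, c_p, c_f, n_grid=2000):
    """Classical age-replacement long-run cost rate:
        C(T) = (c_f * F(T) + c_p * R(T)) / integral_0^T R(t) dt,
    with the mean cycle length computed by the midpoint rule."""
    h = T / n_grid
    mean_uptime = h * sum(weibull_reliability((i + 0.5) * h, shape, scale)
                          for i in range(n_grid))
    R = weibull_reliability(T, shape, scale)
    return (c_f * (1.0 - R) + c_p * R) / mean_uptime

# Crude grid search for the optimal replacement age (IFR Weibull, c_f >> c_p)
grid = [0.1 * i for i in range(1, 60)]
best_T = min(grid, key=lambda T: age_replacement_cost_rate(T, 2.0, 1.0, 1.0, 10.0))
```

With an increasing failure rate and failures much costlier than planned replacements, the optimal T is finite and well inside the grid.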
|
12h30-14h00 | Lunch | |||||
14h00-15h30 |
Coherent Reliability System and Recurrent Events
Coherent Reliability System and Recurrent Events Paul KVAM, Byeong Min MUN, Suk Joo BAE
From our investigation of complex repairable artillery systems that include several failure modes, we derive a superposed process based on a mixture of nonhomogeneous Poisson processes in a minimal repair model. This allows for a bathtub-shaped failure intensity that models artillery data better than currently used methods. The method of maximum likelihood is used to estimate model parameters and construct confidence intervals for the cumulative intensity of the superposed process. Superposed Log-Linear Process to Model Repairable Artillery Systems
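A superposition of NHPPs can be simulated component-wise. The sketch below draws one realization of a log-linear NHPP, lambda(t) = exp(a + b t), on [0, T] by Lewis-Shedler thinning; the intensity parameters are illustrative and the mixture weights of the paper are not modelled.

```python
import math
import random

def simulate_loglinear_nhpp(a, b, T, seed=3):
    """One realization of an NHPP with log-linear intensity
    lambda(t) = exp(a + b*t) on [0, T], via Lewis-Shedler thinning:
    propose homogeneous Poisson events at the peak rate, then accept
    each with probability lambda(t)/lambda_max."""
    random.seed(seed)
    lam_max = math.exp(a + max(0.0, b) * T)   # upper bound on the intensity
    t, events = 0.0, []
    while True:
        t += random.expovariate(lam_max)      # next candidate epoch
        if t > T:
            return events
        if random.random() <= math.exp(a + b * t) / lam_max:
            events.append(t)
```

Summing such realizations across failure modes gives a sample path of the superposed process.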
Marco BURKSCHAT, Jorge NAVARRO
The model of sequential order statistics has been proposed for describing increasingly ordered failure times of components in technical systems where failures may have an impact on the lifetimes of remaining components. In the considered systems the lifetime distributions of surviving components are allowed to change after the occurrence of a failure. In the talk systems based on sequential order statistics with underlying distributions possessing proportional hazard rates are studied. In that case the lifetime distribution of the system can be expressed as a distorted distribution. Motivated by the distribution structure in the case of pairwise different model parameters, a particular class of distorted distributions, the generalized proportional hazard rate model, is defined and characterizations of stochastic comparisons for several stochastic orders are obtained. Moreover, related asymptotic results on aging characteristics of general distorted distributions with applications to coherent systems based on sequential order statistics are also considered. Stochastic comparisons of systems based on sequential order statistics via properties of distorted distributions
Akim ADEKPEDJOU, Sophie DABO-NIANG
This talk pertains to the modeling and analysis of recurrent event data in the presence of spatial correlation. Consider n independent units that are located in different geographical areas described by their longitude and latitude in a two dimensional surface. Existing estimators of parameters with these types of data are based on the assumption of independence of units and will fail to capture any potential spatial patterns that may exist between these areas. In this talk, we propose a new class of semiparametric models for recurrent events that can be used to identify risk factors at onset and further recurrence of events while accounting for spatial correlation. Parameters in the models are estimated using weighted estimating functions. Asymptotic properties based on increasing domains are discussed. A brief simulation study along with illustration are provided. Semiparametric estimation with spatially correlated recurrent events
|
Lifetime Data Analysis
Lifetime Data Analysis Anne C.M. THIEBAUT, Malamine GASSAMA, Jacques BENICHOU
The attributable risk (AR) measures the proportion of disease cases that can be attributed to a deleterious exposure in a population, while the prevented fraction (PF) measures the proportion of cases that could be avoided in the presence of a protective exposure. With lifetime data, several definitions and estimation methods have been proposed for AR, but none for PF. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on the Kaplan-Meier estimator, one semiparametric based on the Cox model, and one parametric based on the piecewise constant hazards model. We applied these methods to estimate the AR of breast cancer associated with menopausal hormone therapy and considered extensions to estimate the PF of stroke associated with lipid-lowering drugs. Under proportional hazards, all methods yielded unbiased results but the nonparametric methods displayed greater variability than the others. Under nonproportional hazards, the semiparametric and parametric approaches performed poorly. All four methods were easily adapted to derive PF estimates for lifetime data. Overall, the use of AR and PF should be encouraged to evaluate the impact of exposure on disease risk; with lifetime data, the semiparametric or parametric approach should be used if the proportional hazards assumption seems appropriate. Estimation of attributable risk and preventive fraction in cohort studies
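In the survival-function formulation, AR(t) compares the observed cumulative incidence with the counterfactual incidence under no exposure. A sketch under a constant-hazards specification (the simplest piecewise-constant-hazards case), with illustrative exposure prevalence and hazard values:

```python
import math

def attributable_risk(t, p_exposed, lam0, hazard_ratio):
    """AR(t) = (F(t) - F*(t)) / F(t), where F is the observed cumulative
    incidence and F* the counterfactual incidence with exposure removed.
    Constant hazards: lam0 for unexposed, lam0 * hazard_ratio for exposed."""
    F = (p_exposed * (1.0 - math.exp(-lam0 * hazard_ratio * t))
         + (1.0 - p_exposed) * (1.0 - math.exp(-lam0 * t)))
    F_star = 1.0 - math.exp(-lam0 * t)
    return (F - F_star) / F
```

With a protective exposure (hazard ratio below one) the same quantity is negative, which is how the prevented fraction arises from the analogous formula.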
Pascale TUBERT-BITTER, Catherine HILL, Sylvie ESCOLANO
Self-controlled case series (SCCS) is a conditional cohort method that uses only cases to estimate incidence ratios of an adverse event and has been initially developed to assess vaccine adverse effects. In order to produce unbiased estimates, case collection must be independent of vaccination. We present SCCS extensions for analyzing cases of vaccine adverse events spontaneously reported to a surveillance database, where this assumption is not met (large amount of underreporting and variation of reporting with time since vaccination), with parametric and nonparametric assumptions to account for the specific features of the data. Performances of the proposed method were evaluated with a large simulation study. It was applied to assess the risk of intussusception after anti-rotavirus vaccination from worldwide spontaneous reports. The proposed methods are an effective way to explore and quantify vaccine safety signals from spontaneous reports. A self-controlled case series modelling approach for estimating risks of adverse events after vaccination from spontaneous reporting data
A minimax asymptotic rate of convergence for nonparametric hazard rate estimation in the presence of random right censoring is obtained, using the link between the Kullback-Leibler distance between two probabilities and a weighted $L_p$-type distance between their corresponding hazards. Bounds for the speed of convergence of hazard rate estimation with right censoring
|
Special Session in Honor of Wenbin Wang - 2
Special Session in Honor of Wenbin Wang - 2 Hao PENG, Geert-Jan VAN HOUTUM
Due to the development of sensor technologies, condition-based maintenance (CBM) programs can be established and optimized based on the data collected through condition monitoring. CBM activities can significantly increase the uptime of a machine. However, they should be conducted in coordination with the production plan to reduce interruptions. On the other hand, the production lot size should also be optimized by taking the CBM activities into account. Relatively little work has been done to investigate the impact of the CBM policy on production lot-sizing and to propose joint optimization models of both the economic manufacturing quantity (EMQ) and the CBM policy. In this paper, we evaluate the long-run average cost rate of a degrading manufacturing system using renewal theory. The optimal EMQ and CBM policy can be obtained by minimizing the long-run average cost rate, which includes setup cost, inventory holding cost, lost sales cost, predictive maintenance cost and corrective maintenance cost. Unlike previous works on this topic, we allow the use of continuous-time and continuous-state degradation processes, which broadens the application area of this model. Numerical examples are provided to illustrate the use of our model. Joint Optimization of Condition-Based Maintenance and Production Lot-Sizing - An Extended Abstract
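The renewal-theory ingredient of the abstract above, the long-run average cost rate as E[cycle cost]/E[cycle length], can be sketched by Monte Carlo. All quantities below (gamma-distributed degradation increments, thresholds, and cost values) are hypothetical stand-ins, not the authors' model, and the discrete inspection cycle is a simplification of their continuous-time setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: gamma-process degradation checked each cycle.
shape, scale = 1.0, 1.0            # gamma increment per production cycle
dt = 1.0                           # length of one production cycle
pm_level, fail_level = 6.0, 10.0   # predictive / corrective thresholds
c_setup, c_pm, c_cm = 1.0, 5.0, 20.0

def one_renewal_cycle():
    """Simulate degradation until PM or failure; return (cycle cost, length)."""
    x, t, cost = 0.0, 0.0, 0.0
    while True:
        x += rng.gamma(shape, scale)
        t += dt
        cost += c_setup
        if x >= fail_level:        # corrective maintenance renews the system
            return cost + c_cm, t
        if x >= pm_level:          # predictive maintenance renews the system
            return cost + c_pm, t

cycles = [one_renewal_cycle() for _ in range(20000)]
costs, lengths = np.array(cycles).T
cost_rate = costs.mean() / lengths.mean()   # long-run average cost rate
```

Optimizing the EMQ and CBM policy would then amount to searching over decision variables (here, pm_level and dt) for the minimum of this rate.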
Qiuzhuang SUN, Zhisheng YE
Mechanical systems usually undergo repair (maintenance) upon failure. In reality, most maintenance actions are imperfect. To model the effectiveness of imperfect maintenance, the existing literature usually assumes that a maintenance action reduces the failure rate or the virtual age of the system by a fixed proportion. In fact, if a failure happens to occur right after the previous one, the maintenance that restores the system is nearly minimal. Therefore, the effectiveness of repair upon failure depends on the time elapsed since the last repair. This paper aims to model this time-varying maintenance effectiveness and derive the corresponding optimal repair/replacement policy upon failure. A semi-Markov decision process framework is formulated to obtain the optimal maintenance policy that yields the minimum long-run average operational cost. Via the policy iteration algorithm, we show that the optimal decision upon failure follows a monotone control limit policy. Optimal Maintenance Policy with a Time-Varying Repair Effectiveness
Khanh NGUYEN, Tuan HUYNH, Phuc DO, Christophe BERENGUER, Antoine GRALL
The quality of condition monitoring (CM) is an important factor that directly affects the effectiveness of a condition-based maintenance (CBM) program. While numerous works in the literature have considered problems related to CM quality, few of them focus on adjusting CM quality for CBM optimization. In this paper, assuming that the quality of CM information can be controlled by adjusting inspection costs, we investigate how such an adjustment can help to reduce the total cost of a CBM program. The CM quality is characterized by the variance of the system degradation information returned by an inspection. Normally, more accurate CM information requires a more sophisticated inspection, and hence a higher cost. An inspection and replacement strategy adapted to such a variance is proposed and formulated based on Partially Observable Markov Decision Processes (POMDP). The use and advantages of the proposed joint inspection and maintenance model are discussed and compared to a more classical model through numerical examples. A POMDP model for the joint optimization of condition monitoring quality and replacement decisions
|
System Reliability and Maintenance Modeling - ORSJ 2
System Reliability and Maintenance Modeling - ORSJ 2 Shuhei OTA, Mitsuhiro KIMURA
In this study, we investigate the effect of dependent failure occurrence on system reliability assessment. In general, it is known that an $n$-component parallel system cannot deliver its designed reliability if the lifetimes of the individual components are positively dependent. On the other hand, an $n$-component series system can exceed its designed reliability in such a dependent failure-occurrence environment. This research analyzes to what extent the dependence among the components worsens or improves the reliability of $n$-component coherent systems. The dependence among the components is modeled by the FGM copula. We obtain the results by using numerical examples. Moreover, in these examples, we newly consider the allowable values of the parameters of the FGM copula, although they have been simply assumed to be $[0,1]^n$ in the literature. A study on reliability deterioration and improvement of coherent systems under dependent failure-occurrence environment
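The qualitative claim above, positive dependence hurts a parallel system and helps a series system, can be checked numerically with a bivariate FGM copula. The exponential component lifetimes and the parameter values are illustrative choices for this sketch; the one fact relied on is that the survival copula of an FGM copula is again FGM with the same parameter.

```python
import math

def fgm(u, v, theta):
    """Bivariate FGM copula C(u, v) = u v (1 + theta (1-u)(1-v))."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

# Two components with exponential lifetimes, evaluated at time t.
t, lam1, lam2, theta = 1.0, 1.0, 0.5, 0.8    # theta > 0: positive dependence
S1, S2 = math.exp(-lam1 * t), math.exp(-lam2 * t)
F1, F2 = 1.0 - S1, 1.0 - S2

# Survival copula of FGM is FGM with the same parameter, so the joint
# survival P(T1 > t, T2 > t) is fgm(S1, S2, theta).
series_dep = fgm(S1, S2, theta)              # series system reliability
series_ind = S1 * S2                         # independent benchmark
parallel_dep = 1.0 - fgm(F1, F2, theta)      # 1 - P(T1 <= t, T2 <= t)
parallel_ind = 1.0 - F1 * F2
```

For any theta > 0 this gives series_dep > series_ind and parallel_dep < parallel_ind, matching the deterioration/improvement pattern the abstract analyzes.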
Taishin NAKAMURA, Hisashi YAMAMOTO, Xiao XIAO, Natsumi TAKAHASHI
A connected-(r,s)-out-of-(m,n):F lattice system consists of $m\times n$ components arranged as an $(m,n)$ matrix, and fails if and only if the system has an $(r,s)$ sub-matrix in which all components fail. One of the most significant problems in reliability theory is the component arrangement problem (CAP), on the assumption that component reliabilities are given and components are interchangeable. The CAP is to find optimal arrangements of components that maximize the system reliability. By taking optimal arrangements into account, we can make the best use of limited resources and maximize the performance of the system. In this study, we provide necessary conditions for the optimal arrangement of the connected-(r,s)-out-of-(m,n):F lattice system with its minimal cuts overlapping, that is, $m<2r$ or $n<2s$. Since we calculate the reliability of only the systems corresponding to the arrangements satisfying the conditions, we can considerably reduce the search space for the CAP. We evaluate the performance of the proposed algorithm by performing numerical experiments. Necessary Conditions for Optimal Arrangement of Connected-(r,s)-out-of-(m,n):F Lattice System with Minimal Cuts Overlapping
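The system definition above is easy to make concrete with a brute-force sketch: check every contiguous $(r,s)$ sub-matrix for an all-failed block, and compute exact reliability by enumerating component states. This illustrates only the structure function, not the necessary-condition algorithm proposed in the talk, and the component reliabilities are hypothetical.

```python
from itertools import product

def system_fails(state, r, s):
    """connected-(r,s)-out-of-(m,n):F fails iff some contiguous r x s
    sub-matrix consists entirely of failed (0) components."""
    m, n = len(state), len(state[0])
    return any(
        all(state[i + a][j + b] == 0 for a in range(r) for b in range(s))
        for i in range(m - r + 1) for j in range(n - s + 1)
    )

def reliability(p, r, s):
    """Exact reliability by enumerating all 2^(m*n) component states;
    p[i][j] is the reliability of the component at position (i, j)."""
    m, n = len(p), len(p[0])
    rel = 0.0
    for bits in product([0, 1], repeat=m * n):
        state = [list(bits[i * n:(i + 1) * n]) for i in range(m)]
        prob = 1.0
        for i in range(m):
            for j in range(n):
                prob *= p[i][j] if state[i][j] else 1.0 - p[i][j]
        if not system_fails(state, r, s):
            rel += prob
    return rel

# A (2,2)-out-of-(2,2):F system fails only if all four components fail.
p = [[0.9, 0.9], [0.9, 0.9]]
r_sys = reliability(p, 2, 2)   # 1 - 0.1**4 = 0.9999
```

The exponential cost of this enumeration is exactly why pruning the CAP search space with necessary conditions, as the abstract proposes, matters.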
Syouji NAKAMURA, Xufeng ZHAO, Toshio NAKAGAWA
In order to ensure data security, plans of incremental and differential backups are usually set up to reduce the costs incurred by full backups. In this paper, we suppose that failures occur at data updating times and backups are implemented only at the end of data updates, and that full backups are done at time $T$, at the first update over time $T$, or at update $N$, to balance the costs of data backup and failure recovery. Using the theory of renewal processes, we obtain the expected costs of backup and recovery and the expected cost rates for full backups. Optimum solutions of $T$ and $N$ that minimize the expected cost rates are obtained, and their comparisons are discussed through analytical optimization of the integrated models. OPTIMUM BACKUP POLICIES WITH FAILURES AT RANDOM UPDATING TIMES
|
Warranty Policy Evaluation and Maintenance
Warranty Policy Evaluation and Maintenance Shaomin WU, Ming LUO
The development of warranty policies has attracted attention from researchers in both the reliability and supply chain communities. Two conventionally assumed contexts in the optimisation of warranty policies are reliability mathematics and game theory. In both scenarios, the causes of warranty claims are usually assumed to be hardware systems only. In reality, however, user behaviour, software failure, and hardware failure may all cause warranty claims, which may be handled differently. In addition, some subsystems/components may be installed in different products. The existing research focuses on individual systems/products, and little attention has been paid to the effect of the interplay of different systems and of different subsystems. This work investigates the consequences of such interplay and reports our recent results. Optimisation of warranty policy considering the interplay of product subsystems
Zhenglin LIANG, Bin LIU
Infrastructures are subject to multiple deteriorating processes such as fatigue, creep and cracking. The gamma process excels in modeling such deterioration processes as it captures the accumulation of damage over time. In practice, two types of maintenance policies, namely time-based preventive maintenance and condition-based maintenance, have been widely implemented to protect assets against failure. This paper aims to develop a method for selecting between the two maintenance policies for assets with a gamma deteriorating process. An index, the "Effectiveness of Preventive Maintenance", is proposed for maintenance policy selection. Selection of maintenance policies for infrastructures with gamma deteriorating process
Nan ZHANG, Mitra FOULADIRAD, Anne BARROS
This paper analyses the expected warranty costs from the perspectives of the manufacturer and the consumer respectively. Both the non-renewing free replacement policy and the renewing replacement policy are examined for a two-component series system with failure interaction between components. Our primary objective is to provide explicit expressions for the warranty cost allocations between the manufacturer and the consumer by taking into account the product service time. Numerical examples are given to demonstrate the applicability of the methodology. It is shown that, independent of the type of warranty policy, the failure interaction between components has an impact on the manufacturer's profits and the consumer's costs. WARRANTY COST ANALYSIS OF A TWO-COMPONENT SYSTEM WITH STOCHASTIC DEPENDENCE
|
Reliability and Optimization in Shock Models
Reliability and Optimization in Shock Models Ji Hwan CHA, Maxim FINKELSTEIN
Stochastic failure models for systems under a randomly variable environment (dynamic environment) are often described using a hazard rate process. In this paper, we study a reliability model described by a hazard rate process induced by external shocks affecting a system, where the shocks follow a nonhomogeneous Poisson process. We derive and study the ‘conditional properties’ of the stochastic failure model. Based on these properties, we analyze and interpret the shape of the failure rate in terms of population dynamics. ON POPULATION DYNAMICS IN A SHOCK MODEL
Javier RIASCOS-OCHOA, Mauricio SÁNCHEZ-SILVA
Reliability estimation of systems under shock-based deterioration presents three main challenges: first, finding appropriate (and sufficiently general) probabilistic models for the random variables, such as the inter-arrival times $X_i$ and shock sizes $Y_i$, that reproduce different deterioration trends; second, fitting the parameters and variables of these models to real data; and third, even with accurate probabilistic models, the reliability estimation is numerically intractable due to the convolutions involved. This paper presents a model for cumulative-shock deterioration based on the formalism of Phase-type (PH) distributions. By assuming that the inter-arrival times and shock sizes are independent and follow (not necessarily identical) PH distributions, the previous problems can be easily handled. This is possible by means of two useful properties: PH distributions can fit any dataset or distribution (with positive support), and their matrix-geometric properties allow easy-to-compute expressions for the reliability quantities. Moreover, by relaxing the identical distribution assumption, different deterioration trends can be obtained, which reproduce different deterioration processes in practice. Illustrative examples demonstrate the convenience of the proposed model. Reliability and data fitting of shock-based deterioration models: applications of Phase-Type distributions
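The "easy-to-compute expressions" referred to above rest on the standard PH survival formula $R(t) = \alpha\, e^{Tt}\, \mathbf{1}$, where $\alpha$ is the initial phase distribution and $T$ the sub-generator. A minimal sketch, with an Erlang(2) inter-arrival time chosen purely because it has a closed-form check (not an example from the paper):

```python
import numpy as np
from scipy.linalg import expm

def ph_survival(alpha, T, t):
    """Survival function of a PH(alpha, T) distribution: alpha @ expm(T t) @ 1."""
    return float(alpha @ expm(T * t) @ np.ones(len(alpha)))

# Erlang(2, lam) written as a PH distribution: two exponential phases in series.
lam = 2.0
alpha = np.array([1.0, 0.0])           # start in the first phase
T = np.array([[-lam, lam],
              [0.0, -lam]])            # sub-generator (absorption omitted)

s = ph_survival(alpha, T, 1.0)
s_exact = np.exp(-lam * 1.0) * (1.0 + lam * 1.0)   # closed form for Erlang(2)
```

The same one-liner evaluates the survival of any PH-distributed quantity, which is what makes the matrix-geometric route attractive for the convolutions the abstract mentions.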
Femin YALCIN
In this paper, a generalized class of shock models is defined and studied. The lifetime of the system under this class is defined through a compound random variable $T=\sum_{i=1}^{N} Y_i$, where $N$ denotes the number of shocks that cause failure of the system, and $Y_1, Y_2, \ldots$ represent the times between arrivals of successive shocks. The reliability properties corresponding to $T$ are investigated when $N$ has a phase-type distribution and depends on $Y_1, Y_2, \ldots$. Reliability properties of a generalized class of shock models
|
15h30-16h00 | Coffee break | |||||
16h00-17h30 |
ISBA/IS
ISBA/IS Majid ASADI, Nader EBRAHIMI, Ehsan SOOFI
Recently we have shown that the proportional hazards (PH) model provides a medium for connecting the Gini coefficient, Fisher information, and Shannon entropy. This presentation gives an overview of some of the results given in [1], new reliability application examples, and a new result. The applications are the Bayes risk of the mean residual life of the system at the system level and the computation of bounds for the Bayes risks of PH models with gamma and mixture-of-exponentials baseline distributions. The new result concerns the ordering of Bayes risks of the mean residual life under PH models. Bayes, Gini, Fisher, and Shannon under proportional hazards
David RIOS INSUA, Fabrizio RUGGERI, Refik SOYER, Daniel GARCIA RASINES
There are problems in reliability analysis and life testing that may involve two or more actors with competing interests. These problems with adversarial components can be set up as games and solved using game-theoretic methods. In this paper, we present an alternative approach based on the adversarial risk analysis framework to deal with such problems. We illustrate the framework through acceptance sampling and life testing problems. Adversarial Issues in Life Testing
Nigel CLAY
Discrete phase-type (DPH) distributions describe the distribution of stopping times on a Markov chain with an absorbing state. Estimation of the parameters of these distributions has been limited to approaches which produce point estimates. In this paper it is shown how a Bayesian framework can be applied to the problem. Three different forms of the parameter space are discussed. In particular, a DPH form with a unique, minimal representation is shown to have computational advantages. Bayesian Estimation of Discrete Phase-Type Distributions
|
Resilience Modeling
Resilience Modeling Nazanin MORSHEDLOU, Kash BARKER, Giovanni SANSAVINI
We propose a multi-objective vulnerability/recoverability formulation that aims to (i) strengthen the ability of a network to withstand a disruptive event, and (ii) enhance the ability of the network to recover in a timely manner. In the first objective function, an adaptive capacity formulation is introduced to allocate limited fortification resources to disrupted network components to reduce vulnerability in the short term after the occurrence of a disruptive event. In the second objective function, a restorative capacity formulation assigns different work crews with dynamic recovery rates to disrupted components to minimize network recovery time. A goal of this work is to understand the tradeoff between pre-disruption investments in adaptive capacity and post-disruption investments in restorative capacity. The formulation is illustrated on a 400kV French transmission network case study. TRADING OFF VULNERABILITY AND RECOVERABILITY IN NETWORK RESILIENCE
Dante GAMA DESSAVRE, Andrea GARCIA TAPIA, Jose E. RAMIREZ-MARQUEZ
Resilience is generally understood as the ability of an entity to recover from an external disruptive event. Community resilience in particular refers to the ability of communities to utilize resources to withstand disruptive situations. Yet, there is a big challenge when analyzing large communities, since they are vulnerable to multiple kinds of events, in addition to the large number of actors and resources involved. The number of resources, strategies and potential disruptions grows (at least) exponentially with the size and number of the communities. This presents a big challenge for measuring and visualizing community resilience.
The objective of this work is to combine community resilience metrics and network models with interactive visualization to estimate overall community resilience in an intuitive and approachable manner for decision makers. To illustrate and study the problem, simulated network problems are used.
In addition to the visualization tools and resilience metrics, parallel computing technologies were utilized in order to study the computational complexity of community resilience estimation. Network Tools For Big Community Resilience Visualization
Xu ZHANG, Liping SUN, Haitao LIAO, Edward POHL
Condition monitoring (CM) via multiple sensors has been widely implemented to ensure the reliability and safety of engineering systems. However, an ironic issue commonly faced by engineers is that the sensors in such CM systems may fail before the system being monitored does, because of either reliability issues or harmful attacks. Upon the loss of sensor signal(s), one task is to determine whether it is due to sensor failures or to the failure of the system being monitored. The way to enhance the dependability of such a CM system is to enable its resilience capability. This study focuses on the use of wavelet analysis and multivariate time series to identify sensor failures and enable signal prediction based on the different levels of correlation among multiple sensor signals. The goal is to provide an effective method that balances the response time and the accuracy and precision of signal recovery. The methodology is illustrated using a case study on hawser CM in a Floating Production, Storage and Offloading (FPSO) system. IMPROVING RESILIENCE CAPABILITY OF A MULTICHANNEL CONDITION MONITORING SYSTEM SUBJECT TO PARTIAL FAILURES
|
Industrial Applications of MMR
Industrial Applications of MMR Vitali VOLOVOI
Redundancy is commonly used as an effective means to ensure the high dependability of engineering systems. This leads to a trade-off related to the timing of replacing failed components: faster replacement reduces the risk of system failure but is generally more expensive. This paper discusses several approaches to assessing the risks of system failure in the presence of deferred maintenance. The relationships among several existing models are examined, with a focus on the commonality and limitations of those models. In addition to Markov models, the impact of deviating from their assumptions is studied for two applications with fixed repair time delays. Deferred Maintenance of Redundant Systems
Pierre DERSIN, Benjamin BONNET, René VALENZUELA, Michele PUGNALONI
Reliability prediction, estimation and demonstration are increasingly important in industry and still represent a real challenge. Inaccurate prediction of product reliability entails adverse consequences in terms of profitability and competitiveness. The purpose of this paper is to share Alstom's experience in this area. Specifically, the following topics are addressed: comparison of various reliability handbooks; use of accelerated life testing (ALT); design decisions aiming at influencing the mission profile; statistical analysis of field data to estimate the dependence of reliability on key influencing factors; and, finally, a heuristic method for making field reliability demonstration tests more user-friendly. Some Reliability Challenges in the Rail Industry: Case Studies and Lessons learned
Vasiliy KRIVTSOV, Michael FRANKSTEIN
This paper is a sequel to our report at MMR-2004, wherein we proposed a simple procedure to construct the joint prior and posterior distributions of Weibull parameters based on the underlying reliability function estimates in two time cross-sections. The procedure turned out to be quite useful in practical applications. As of 2017, it has been cited in over 70 publications and was chosen to be implemented in the 2013 release of the JMP® reliability analysis software package by the SAS Institute. In this paper, we extend the procedure in three aspects: a) the prior data can now be taken in terms of a simple probability paper plot; b) the posterior now includes not only posterior estimates of the distribution parameters, but also the posterior estimate of the underlying reliability function along with the respective credibility intervals; and c) we show that the proposed procedures can be applied to any parametric lifetime distribution, not necessarily limited to the location-scale family. A Bayesian estimation procedure for lifetime distributions with automotive applications
|
Maintenance Policies
Maintenance Policies Ho Si Hung NGUYEN, Phuc DO, Benoit IUNG, Hai-Canh VU
This paper presents a dynamic grouping maintenance strategy for geographically dispersed production systems (GDPS). Economic dependence at both the component and site levels is investigated and integrated into the grouping model. The grouping and routing optimization then becomes an NP-hard problem, which is solved here by the joint implementation of a genetic algorithm (GA) and a Traveling Salesman Problem (TSP) approach. The use and advantages of the proposed grouping maintenance approach are illustrated through a numerical example of a GDPS consisting of 12 components located in four different sites. A dynamic grouping maintenance strategy for geographically dispersed production systems
M.D. BERRADE, P.A. SCARF, C.A.V. CAVALCANTE
We analyze the effects of not executing the replacement of a component immediately upon discovery of a defect. In practice, different reasons can lead to this policy: preparing an effective replacement, avoiding production disruption, taking advantage of a new model that is about to replace the one currently in use, or planned maintenance of other components. We explore conditions that make postponement a cost-optimal strategy. A MODEL FOR POSTPONING REPLACEMENT IN MAINTENANCE
Shaomin WU
Development of models for the failure process of a repairable system has been an interesting research topic for decades, and many models have been developed in the literature. However, more research is still needed to deal with various difficulties raised in practical applications, including the facts that most real systems are composed of more than one component and that the failure process may not be stochastically monotone. For such cases, we have recently developed three models, which are briefly discussed in this paper. Three models for the failure process of a repairable system
|
Likelihood-based Methods in Lifetime Data
Likelihood-based Methods in Lifetime Data Hideki NAGATSUKA, N. BALAKRISHNAN
The generalized Pareto distribution (GPD), named by Pickands in 1975, is a limiting distribution for exceedances over thresholds, and it offers a unifying approach for modeling the tails of various distributions. It is used in various applied areas such as hydrology, reliability, life science, material science and quality control. The estimation of the parameters of the GPD poses a challenging problem, and the existing methods have some theoretical and/or computational difficulties. In this talk, we will introduce an estimation method for the GPD, proposed by Nagatsuka and Balakrishnan (2017, submitted). Under mild regularity conditions, this method provides estimates which always exist uniquely and are consistent over the entire parameter space. Finally, we will illustrate the proposed method with some real data sets. Likelihood-based inference for the generalized Pareto distribution and its applications
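For readers who want to experiment with GPD fitting, a baseline sketch using scipy's generic maximum-likelihood fit is below. This is emphatically not the Nagatsuka-Balakrishnan estimator of the talk (whose motivation is precisely that generic MLE can fail for the GPD); the true parameter values and the simulated exceedances are illustrative assumptions.

```python
from scipy.stats import genpareto

# Simulated threshold exceedances from a GPD with shape 0.3, scale 1.0.
data = genpareto.rvs(c=0.3, scale=1.0, size=20000, random_state=42)

# Generic MLE, with the location fixed at the threshold (0 here).
c_hat, loc_hat, scale_hat = genpareto.fit(data, floc=0)
```

With a well-behaved sample this size, the MLE recovers the shape and scale closely; the pathologies the talk addresses arise for small samples and certain regions of the parameter space.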
Debasis KUNDU, Debanjan MITRA, Ayon GANGULY
Analysis of left truncated and right censored data is considered under the framework of competing risks. The latent failure times model is assumed. The results are developed assuming there are two competing causes, but all the results can be easily extended to cases with multiple causes. The lifetimes corresponding to the competing causes of failures are assumed to follow Weibull distributions. Maximum likelihood estimates for the model parameters are obtained. Confidence intervals for the model parameters are obtained through parametric bootstrap approaches. The Bayes estimates and the associated credible intervals of unknown parameters are also obtained. Through an extensive Monte Carlo simulation study, the performances of the proposed methods are assessed. A data analysis is performed to illustrate the proposed methods. Likelihood and Bayesian Inference for Left Truncated and Right Censored Competing Risks Data
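The latent-failure-times likelihood in the abstract above has a compact core: an observation (t, cause j) contributes $f_j(t)\,S_{j'}(t) = h_j(t)\,S_1(t)\,S_2(t)$. The sketch below implements only this uncensored, untruncated core (the paper additionally handles left truncation and right censoring); the Weibull parameter values and sample size are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 3000
# Latent Weibull failure times for two competing causes (true parameters).
t1 = rng.weibull(1.5, n) * 2.0           # cause 1: shape 1.5, scale 2.0
t2 = rng.weibull(2.5, n) * 3.0           # cause 2: shape 2.5, scale 3.0
t = np.minimum(t1, t2)                   # observed failure time
cause = np.where(t1 <= t2, 1, 2)         # observed cause of failure

def neg_loglik(par):
    k1, s1, k2, s2 = np.exp(par)         # log-parametrisation keeps params > 0
    def log_h(t, k, s):                  # log hazard of Weibull(shape k, scale s)
        return np.log(k / s) + (k - 1) * np.log(t / s)
    def log_S(t, k, s):                  # log survival
        return -(t / s) ** k
    ll = np.where(cause == 1, log_h(t, k1, s1), log_h(t, k2, s2))
    return -np.sum(ll + log_S(t, k1, s1) + log_S(t, k2, s2))

res = minimize(neg_loglik, np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
k1_hat, s1_hat, k2_hat, s2_hat = np.exp(res.x)
```

Parametric bootstrap confidence intervals, as in the abstract, would then repeat this fit on samples simulated from the fitted parameters.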
Suvra PAL, Jacob MAJAKWARA, N. BALAKRISHNAN
In this talk, I will consider the destructive COM-Poisson regression cure rate model. This model assumes the initial risk factors in a competitive scenario to undergo a destructive process, so that what is recorded is only the undamaged portion of the original number of risk factors. By assuming a COM-Poisson distribution for the initial risk factors, I will discuss the steps of the EM algorithm to determine the MLEs of the model parameters. Next, I will present the results of an extensive simulation study to demonstrate the performance of the proposed estimation method. The flexibility of the COM-Poisson family will be utilized to carry out model discrimination using the likelihood ratio test. Finally, I will analyze a real melanoma dataset for illustrative purposes. Destructive COM-Poisson Cure Rate Model and Associated Likelihood Inference
|
Degradation Based Reliability Modelling and Maintenance Decision Making
Degradation Based Reliability Modelling and Maintenance Decision Making Songhua HAO, Jun YANG
Many systems are often subject to degradation and random shocks simultaneously, and their failure is the competing result of soft and hard failures. In practice, these two competing failure processes can have some sort of dependence. Based on the non-cumulative shock model, and considering the impact of random shocks on degradation performance as well as degradation rates, this paper proposes a new reliability model for systems subject to dependent competing failure processes. Random shocks in our model are divided into three different regions according to their magnitude. Shocks with small magnitude do no harm to the system, and shocks with large magnitude cause hard failures directly. Shocks with intermediate magnitude affect the system degradation. The system reliability model is developed through analytical derivation and numerical calculation methods. Finally, a numerical example of Micro-Electro-Mechanical System (MEMS) is conducted to illustrate the implementation of the proposed model. RELIABILITY ANALYSIS FOR COMPETING FAILURE PROCESSES WITH CHANGING DEGRADATION RATES DEPENDENT ON RANDOM SHOCKS
Wenjin ZHU, Shuai ZHANG, Shubin SI
This paper studies the maintenance policy of a load-sharing k-out-of-n:F system. The failure of a component leads to an increase of load on the surviving components. A periodic inspection and preventive maintenance policy and a dynamic maintenance policy based on a reliability threshold are combined to reduce the maintenance and failure cost. The components degrade due to random load and are subject to two kinds of failure: random failure and physical failure. The former is restored to a functional state by minimal repair, while the latter can only be resolved by corrective maintenance. Component failure is self-announcing, but the degradation state must be revealed by inspection. The decision variables are the periodic interval $\tau$ and the reliability threshold $\gamma$. The effect of minimal repair on the system cost is analyzed. MAINTENANCE STRATEGY FOR A DETERIORATING SYSTEM WITH RANDOM LOADS
Han WANG, Yu ZHAO, Xiaobing MA
Stochastic degradation models are becoming more and more popular in degradation analysis. Usually they are selected based on statistical principles or past experience, which may cause misspecification when investigating a newly designed product. In this paper, a new approach for stochastic degradation model selection is proposed by considering the variation of mechanism. First, the inner relationship between degradation mechanism equivalence and model parameters is established based on the constant acceleration factor principle. Then, taking the Wiener, gamma and inverse Gaussian (IG) processes as examples, we investigate the necessary conditions which should be satisfied under different stress levels. Finally, we define the mechanism equivalence factor for each model and measure its fluctuation by using the coefficient of variation (CV). In this way, different degradation models are compared and the optimal model is the one yielding the smallest CV. Stress relaxation data collected from an electronic device are used to illustrate the method. A New Approach for Stochastic Degradation Model Selection Using the Variation of Mechanism
|
18h00-23h00 | Conference dinner - Château du Touvet |
9h00-10h30 |
Dynamic Reliability Models and Statistical Inference
Dynamic Reliability Models and Statistical Inference Bo Henry LINDQVIST, Odd Eirik FARESTVEIT
Models for degradation and maintenance of items based on first passage times of stochastic processes have proven useful in diverse applications such as production machines or pipelines. While the gamma process seems to be the preferred process in applications, we will here consider the inverse Gaussian process. As a concrete example, we consider items that are inspected and maintained at sequentially determined random times, with degradation modeled by increasing stochastic processes. Two threshold levels will be considered for the deterioration process, a lower one corresponding to a degraded state, for which a preventive maintenance action may be performed, and a higher one corresponding to the failure state at which the item is replaced. A simulation based algorithm is developed to calculate long-run expected costs. An interesting part of the algorithm involves bridge sampling from the degradation process. Degradation and maintenance modeling using the inverse Gaussian process
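The inverse Gaussian process underlying the talk above is straightforward to simulate, which is the starting point for any simulation-based cost algorithm: increments over an interval of shape-function length $\Delta\Lambda$ follow $IG(\mu\Delta\Lambda, \eta\Delta\Lambda^2)$ in the mean/shape parametrisation. The sketch below only simulates paths and empirical first-passage times of a failure threshold; it is not the paper's bridge-sampling algorithm, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse Gaussian process with linear shape function L(t) = t:
# increments over an interval of length dt follow IG(mu*dt, eta*dt**2)
# (numpy's `wald` sampler takes mean and shape).
mu, eta = 1.0, 4.0                     # hypothetical process parameters
dt, n_steps, n_paths = 0.1, 200, 2000
fail_level = 10.0                      # replacement (failure) threshold

inc = rng.wald(mu * dt, eta * dt ** 2, size=(n_paths, n_steps))
paths = np.cumsum(inc, axis=1)         # monotone increasing degradation paths
end_mean = paths[:, -1].mean()         # should be close to mu * n_steps * dt

# Empirical first-passage times of the failure threshold.
crossed = paths >= fail_level
hit = crossed.any(axis=1)
first = (crossed.argmax(axis=1) + 1) * dt
mean_fpt = first[hit].mean()
```

Adding a lower (degraded-state) threshold with a preventive action, and costs per action, turns this path simulation into the long-run expected-cost calculation the abstract describes.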
Jean-Yves DAUXOIS, Soufiane GASMI, Olivier GAUDOIN
The aim of this paper is to introduce and study a new model of imperfect maintenance in reliability. A model of geometric reduction of intensity is assumed on the inter-arrival times of failures of a system subject to recurrent failures. Based on the observation of several series of failures and imperfect maintenances on a single system, we introduce estimators of the parameters (Euclidean and functional) of this semiparametric model and we prove their asymptotic normality. Then a simulation study is carried out to learn the behavior of these estimators on samples of small or moderate size. We end this work with an application to a real dataset. Statistical Inference in a model of Imperfect Maintenance with Geometric Reduction of Intensity
Eric BEUTNER, Laurent BORDES, Laurent DOYEN
Virtual age models are very useful to analyse recurrent events data. Among the strengths of these models is their ability to account for the effects of an intervention or treatment just after an event occurrence. So far it has been assumed in the nonparametric or semiparametric setting that the virtual age function is known. One way to overcome this is to consider semiparametric virtual age models with parametrically specified virtual age functions. Yet, fitting semiparametric virtual age models with parametrically specified virtual age functions is a difficult task. In this talk we show that consistent estimators can be constructed by smoothing the profile log-likelihood function appropriately. We show that our general result can be applied to most of the relevant virtual age models of the literature. Our approach also shows that empirical process techniques may be a worthwhile alternative to martingale methods for studying asymptotic properties of these inference methods. A short simulation study is provided to illustrate our consistency results together with an application to real data.
Semi-parametric inference for effective age models when the age function is specified parametrically
|
Signatures - 2
Signatures - 2 Jorge NAVARRO
The study of stochastic comparisons of systems with different structures is a relevant topic in reliability theory. Here we study distribution-free comparisons, that is, orderings which do not depend on the component distributions. We consider different assumptions on the component lifetimes and use the corresponding comparison techniques. Thus, if the lifetimes are independent and identically distributed (IID) or exchangeable, the orderings are obtained by using signatures. If they are just ID (homogeneous components), we use ordering results for distorted distributions. In the general case, or in the case of independent heterogeneous components, we use a similar technique based on generalized distorted distributions. In these cases, the ordering results may depend on the copula used to model the dependence between the component lifetimes. Illustrative examples are included for each case. On comparing coherent systems with homogeneous and heterogeneous components
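For the IID/exchangeable case mentioned above, a system signature can be computed by brute-force enumeration of failure orderings: the i-th signature coordinate is the fraction of orderings under which the system fails at the i-th component failure. A small sketch, using as illustration a 3-component structure (component 1 in series with the parallel pair {2, 3}):

```python
from itertools import permutations

def phi(x):
    """Structure function: component 1 in series with parallel(2, 3)."""
    return x[0] and (x[1] or x[2])

def signature(phi, n):
    """s_i = fraction of component-failure orderings under which the
    system fails exactly at the i-th failure (iid components)."""
    counts = [0] * n
    for order in permutations(range(n)):
        state = [1] * n
        for i, comp in enumerate(order):
            state[comp] = 0          # fail components one by one
            if not phi(state):
                counts[i] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

print(signature(phi, 3))   # → [1/3, 2/3, 0] (as floats)
```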
Narayanaswamy BALAKRISHNAN, William VOLTERMAN
Based on the observed lifetimes of two systems with shared components, we construct exact nonparametric confidence intervals for quantiles of component and system lifetimes using the minimum and maximum lifetimes, as well as by using all component lifetimes. The coverage probabilities and expected widths of these confidence intervals are then compared for several signature matrices and sample sizes. Exact Nonparametric Inference for Component and System Lifetime Distributions Based on Joint Signatures
Hon Keung Tony NG, Yangdan YANG, Narayanaswamy BALAKRISHNAN
In system reliability engineering, systems are made up of different components and can be complex. For various purposes, engineers and researchers are often interested in the lifetime distribution of the system as well as that of the components which make it up. In many cases, the lifetimes of an n-component coherent system can be observed, but not the lifetimes of the components. In recent years, parametric and nonparametric inference for the lifetime distribution of components based on system lifetimes has been developed. We further investigate the estimation of the parameters of component lifetime distributions based on censored system-level data. Specifically, we consider maximum likelihood estimation and propose alternative computational methods and approximations to the maximum likelihood estimators (MLEs). Exploiting the special features of system lifetime data, we treat them as incomplete data and apply the Expectation-Maximization (EM) algorithm to obtain the MLEs, and the stochastic EM (SEM) algorithm to approximate them. Different implementations of the EM and SEM algorithms are proposed and their performances are evaluated. We show that the proposed methods are feasible and easy to implement for various families of component lifetime distributions. Computational Algorithms for the Analysis of System Lifetime Data
|
Networks/Clouds - Resilience, Maintenance and Security
Networks/Clouds - Resilience, Maintenance and Security Huadong MO, Giovanni SANSAVINI
The survivability of microgrids after cyber-attacks has received attention due to the pervasive use of open communication networks. Cyber-attacks can incapacitate the communication between physical components and control centers and lead to unstable system frequency. Current works do not represent the uncertainty of the attacker behavior or the performance of the remaining physical components in delivering the required functionality. In this work, we use the outcome of a contest between the attacker and the defender to quantify the unavailability of cyber components, and a two-stage game to model the microgrid survivability. We employ the data-driven distribution of the peak-demand time to represent the uncertain attack behavior. The microgrid fails if its frequency cannot satisfy the requirements, even if the remaining capacity can meet the demand. Case studies investigate optimal defensive strategies in terms of the trade-off between protection allocation to cyber components and redundancy allocation to physical units, which enhance the microgrid's ability to provide the required load frequency control performance. The defender allocates more resources to protection, reducing the unavailability of the cyber components, when the uncertainty about the most probable attack time is small and the contest intensity is large. OPTIMAL RESOURCE ALLOCATION FOR MICROGRID SURVIVABILITY AGAINST CYBER-ATTACKS
Gregory LEVITIN, Liudong XING
This paper models a situation in which a user partitions and distributes sensitive data among several virtual machines to make unauthorized access to the entire data difficult in a cloud environment subject to co-resident attacks. The attacker creates virtual machines in the same environment, aiming to get access to users' data. The cloud resource management system distributes all virtual machines among servers at random. Unauthorized access to the data associated with a user's virtual machine is possible only if this machine co-resides in the same server with one of the attacker's virtual machines. The arrival of the attacker's requests for creating virtual machines is modelled by a Poisson process. A probabilistic model is suggested to obtain a dynamic data security index. DYNAMIC DATA SECURITY UNDER CO-RESIDENCE ATTACKS IN CLOUD COMPUTING SYSTEMS
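Under a random-placement assumption, a simple closed form for the probability of full compromise can be sketched via Poisson thinning. The parameter values, and the assumption that the user's k data-holding VMs sit on k distinct servers and are all needed for a breach, are illustrative rather than the paper's exact model:

```python
import math

def compromise_prob(t, lam=2.0, N=20, k=4):
    """P(attacker co-resides with all k data-holding VMs by time t).

    Attacker VM requests arrive as a Poisson process with rate lam;
    each VM is placed uniformly at random on one of N servers.  By
    Poisson thinning, hits on each of the k user servers form
    independent Poisson(lam/N) processes.
    """
    p_hit = 1.0 - math.exp(-lam * t / N)   # server i hit by time t
    return p_hit ** k                      # all k servers hit

for t in (1, 10, 100):
    print(t, compromise_prob(t))
```

The resulting curve is a crude "dynamic data security index": it starts at 0, increases with t, and tends to 1 as the attack duration grows.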
Chi ZHANG, Wnahsn LI, Liuquan LI
Networked critical infrastructures, such as electricity, telecommunication and transportation, are critical for both the economic development and the social wellbeing of modern societies. To ensure their continued effective performance, cost-effective maintenance is necessary. Existing studies on selective maintenance are generally restricted to systems with relatively simple structures, such as series and parallel, and they require the system to be entirely shut down in order to conduct the maintenance actions. However, current critical infrastructures usually have complex topologies and need to function continuously without interruption. To deal with this problem, this research proposes a new selective maintenance approach that can determine optimal selective maintenance strategies for critical infrastructures with general structure. Our approach also ensures that the infrastructure's performance satisfies customers' demand continuously, even during the maintenance process. Optimizing selective maintenance for networked infrastructure systems
|
Warranty and Maintenance Modeling with Applications - ORSJ 3
Warranty and Maintenance Modeling with Applications - ORSJ 3 Tomohiro KITAGAWA, Tetsushi YUGE, Shigeru YANAGI
A maintenance model is proposed for a system equipped on a ship taking a voyage of random duration. When a failure occurs, one of three actions is chosen: returning to the base, instantaneous on-site repair, or leaving the failure unrepaired until the end of the voyage. Our goal is to determine the optimal action depending on the occurrence time of failures, where the optimal policy minimizes the expected cost until the completion of one voyage while ensuring a certain mean availability. THREE REPAIR OPTIONS DEPENDING ON FAILURE TIME FOR A SYSTEM EQUIPPED ON SHIP
Nobuyuki TAMURA
We consider a multi-state system whose deterioration is modeled as a semi-Markov process with an absorbing state. The system can suffer major and minor failures. When the system reaches the absorbing state, a major failure occurs and the system is replaced. Meanwhile, a minor failure can occur in each state; upon its occurrence, the system is minimally repaired. For this system, we propose a state-age-dependent replacement policy which minimizes the expected long-run cost rate. We investigate structural properties of the optimal replacement policy and show that a control-limit policy holds under several assumptions. State-age-dependent replacement policy for a semi-Markovian deteriorating system with major and minor failures
Richard ARNOLD, Stefanka CHUKOVA, Yu HAYAKAWA
We present a model for the delayed reporting of faults: multiple non-fatal faults are accumulated and then simultaneously reported and repaired. The reporting process is modelled as a stochastic process dependent on the underlying stochastic process generating the faults. The joint distribution of the reporting times and numbers of reported faults is derived. We will also present a few extensions of the above model, which deal with multiple fault types, planned preventative maintenance and customer rush. Delayed reporting of faults in warranty claims
|
Structural Reliability - IMdR 2
Structural Reliability - IMdR 2 F. A. DIAZ DE LA O
A recent formulation connecting the structural reliability problem and Bayesian updating has been established. This opens up the possibility of efficient model updating using Subset Simulation. The formulation, called ``Bayesian Updating with Structural reliability methods'', requires the prudent choice of a multiplier, which has remained an open question. Motivated by this problem, this paper discusses a revised formulation that allows Subset Simulation to be used for Bayesian updating without having to choose a multiplier in advance. Bayesian Inference with Structural Reliability Methods
Nazih BENOUMECHIARA, Bertrand MICHEL, Philippe SAINT-PIERRE, Roman SUEUR
In a structural reliability problem, the random input parameters are described by a probabilistic model generally obtained from expert feedback or data from experimental tests. In numerous cases, however, this model is incomplete: some information may be unavailable or too costly to obtain, especially concerning the dependence structure between the input variables. The most common industrial practice is to assume independence for the pairs of variables whose dependence structure is unknown. This solution can lead to an overly optimistic evaluation of the risk. Therefore, in order to guarantee the conservatism of the method, we suggest exploring a set of dependence scenarios so as to determine the most penalizing structure. This approach leads to a more pessimistic estimate of margins, but it is also more robust with respect to regulatory criteria. Structural Reliability With Incomplete Dependence Structure
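The scenario-exploration idea can be sketched with a toy limit state: estimate the failure probability under several dependence scenarios (here Gaussian-copula correlations with standard normal margins, chosen purely for illustration) and retain the most penalizing one:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_prob(rho, n=200_000, s=5.0):
    """Monte Carlo estimate of P(X1 + X2 > s) when X1, X2 are standard
    normal and coupled through a Gaussian copula with correlation rho
    (illustrative limit state, not from the paper)."""
    cov = [[1.0, rho], [rho, 1.0]]
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.mean(x.sum(axis=1) > s)

# Explore a set of dependence scenarios; keep the most penalizing one.
scenarios = [-0.5, 0.0, 0.5, 0.9]
probs = {rho: failure_prob(rho) for rho in scenarios}
worst = max(probs, key=probs.get)
print(worst, probs[worst])
```

As expected for this additive limit state, stronger positive correlation is more penalizing, so assuming independence would understate the risk.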
Guillaume CAUSSE, Thierry YALAMAS
When premature failures of mechanical components are detected on an in-service system, the actions required to redesign the component are often expensive and difficult to define precisely. By coupling numerical simulation methods (finite element analysis with ANSYS) with well-known probabilistic methods for sensitivity and reliability analysis (available in PhimecaSoft), PHIMECA defines the most efficient way to redesign a railway component under measured vibratory fatigue loading. First, the transient analysis that computes the evolution of the stress level is developed and validated. Then, the probability distribution of each component parameter (dimensions, material characteristics (E, Re, Rm), soil thickness) is defined from railway standards and expert judgment. A global sensitivity analysis is performed using a surrogate model (polynomial chaos) built on a global design of experiments; it ranks the influence of each variable on the spread of damage. The reliability is also estimated by applying simulation methods to the surrogate model for different damage limit states associated with different lifetimes. The damage mean and standard deviation can be evaluated, and it is then possible to improve the performance in the best way (regarding cost and lifetime improvement) by modifying the mean value or standard deviation of the most sensitive parameters. SENSITIVITY AND RELIABILITY ANALYSIS FOR THE REDESIGN OF MECHANICAL RAILWAY COMPONENT
| |
10h30-11h00 | Coffee break | |||||
11h00-12h30 |
Recent Advances in Failure Time Data Modeling with Partial Observations
Recent Advances in Failure Time Data Modeling with Partial Observations Beidi QIANG, Edsel PENA
For a coherent reliability system composed of several components configured according to some structure function, the time-to-failure or system life distribution is usually of interest. This distribution may be estimated simply from observations of system lifetimes; however, if component lifetime data are available, a better estimator may be obtained. In this work, we demonstrate that shrinkage ideas may be further exploited to gain efficiency in the estimation of the system reliability function, even under nonparametric assumptions about the component time-to-failure distributions. Improved Estimation of System Reliability in a Nonparametric Setting via Shrinkage
Pierre JOLY
Event history analysis can be particularly complex due to competing risks and interval-censored and clustered event times. Multi-state models allow individuals to move between several states and take competing risks into account. Regression models such as proportional transition intensity models are used to assess the impact of individual factors on each transition. For inference, right and interval censoring as well as left truncation have to be dealt with. We incorporate random effects to deal with group-specific factors. To illustrate this, two examples are presented. The aim of the first is to study risk factors of dementia from cohort data and to estimate life expectancies; subjects are ``grouped'' because they live in 75 municipalities of south-western France. The aim of the second example is to compare the life expectancy of a filling in a primary tooth between two types of treatments; several fillings can be placed in the same mouth, possibly by the same dentist, implying a hierarchical cluster structure. Examples of multi-state models for interval-censored and clustered data
Shanshan LI, Yifei SUN, Chiung-Yu HUANG, Dean FOLLMANN, Richard KRAUSE
Although recurrent event data analysis is a rapidly evolving area of research, rigorous studies on modeling and estimating the effects of time-varying covariates on the risk of recurrent events have been lacking. Existing methods for analyzing recurrent event data usually require the covariate processes to be observed throughout the entire follow-up period. However, covariates are often observed periodically rather than continuously. We propose a novel semiparametric estimator for the regression parameters in the popular proportional rate model. The proposed estimator is based on an estimated score function in which we kernel smooth the mean covariate process. We show that the proposed semiparametric estimator is asymptotically unbiased and normally distributed, and we derive its asymptotic variance. Simulation studies compare the performance of the proposed estimator with that of simple methods that carry forward the last observed covariates. Recurrent Event Data Analysis With Intermittently Observed Time-Varying Covariates
|
Statistical Analysis of Dependent Random Variables
Statistical Analysis of Dependent Random Variables Patryk MIZIUŁA, Jorge NAVARRO
We consider coherent systems with dependent heterogeneous components, in particular with stochastically ordered component lifetimes. We present lower and upper bounds on the reliability function and expected lifetime of such systems. The bounds on the system reliability function and expected system lifetime are expressed in the units of the mean of component reliability functions and the mean of component expected lifetimes, respectively. Bounds on the reliability function and expected lifetime of coherent systems with dependent heterogeneous components
J. V. DESHPANDE, Isha DEWAN, K. F. LAM, Uttara NAIK-NIMBALKAR
Let $X$ and $Y$ be two random variables with marginal distributions $G$ and $H$, respectively, and $X \le Y$ a.s. Let $\Psi$ be a function such that $\Psi(G)$ is again a distribution function.
In this article, we propose two tests, one of Kolmogorov-Smirnov type and the other of Wilcoxon type, for the null hypothesis $\Psi(G) = H$ against the
alternative $ H > \Psi(G)$. The tests are based on the empirical distribution functions of the observations on $X$ and $Y,$ which are dependent.
We obtain their asymptotic null distributions. A relationship between the distribution functions of two dependent outcomes can be specified as a hypothesis to be tested in examples like the load sharing models, record values and auction bidding models. As an application, we consider in detail the problem of testing the effect of load sharing in two component parallel systems.
Tests for specific nonparametric relations between two distribution functions with applications
Isha DEWAN
The lifetimes of interest need not be independent and identically distributed random variables. We consider a sequence of associated random variables with a common marginal distribution function $F(x)$. The exponential distribution is synonymous with 'no ageing' for lifetimes; in contrast, units may age with time, which is the concept of positive ageing. Common notions of positive ageing are Increasing Failure Rate (IFR), Increasing Failure Rate Average (IFRA) and New Better than Used (NBU). We propose tests based on U-statistics for testing exponentiality against IFRA and NBU alternatives when the lifetimes of interest are associated. ON TESTS FOR AGING FOR ASSOCIATED RANDOM VARIABLES
|
Deterioration Models
Deterioration Models Massimiliano GIORGIO, Agostino MELE, Gianpaolo PULCINI
In this paper a new noisy gamma degradation process is proposed, where the noisy measurement is modelled as a non-Gaussian random variable that depends stochastically on the hidden degradation level. The main features of the proposed model are discussed. The expression of the likelihood function for a generic set of noisy degradation measurements is derived, and the residual reliability of a degrading unit that fails when its degradation level exceeds a given threshold is formulated. A particle filter method is suggested that allows the likelihood function and the residual reliability to be computed quickly and efficiently. An example is also illustrated, in which the parameters of the (hidden) gamma process and the residual reliability of the degrading units are estimated from a set of noisy degradation data by the maximum likelihood method. A NOISY GAMMA DEGRADATION PROCESS WITH DEGRADATION DEPENDENT NON-GAUSSIAN MEASUREMENT ERROR
Mario LEFEBVRE
Let X(t) denote the wear of a machine and Y(t) the number of items it produces per unit time. A system of stochastic differential equations is considered for the vector process (X(t),Y(t)). The aim is to find the average time it takes the two-dimensional diffusion process to hit a certain boundary for the first time. The appropriate Kolmogorov backward equation is solved explicitly by making use of the method of similarity solutions. Mean of a first-passage time for a two-dimensional diffusion process
Zeina AL MASRY, Sophie MERCIER, Ghislain VERDIER
The standard gamma process is widely used for modeling cumulative deterioration and for making predictions about a system's future behavior. However, this process may be restrictive in applications, since its variance-to-mean ratio is constant over time. The extended gamma process, introduced by Cinlar (1980), seems to be a good candidate to overcome this restriction. The aim of this paper is to investigate, from a reliability point of view, the benefits of using an extended gamma process instead of a standard gamma process for modeling the system evolution. With that goal, we propose a condition-based dynamic maintenance policy and evaluate its performance on a finite planning horizon. Numerical experiments are illustrated based on simulated data. A condition-based dynamic maintenance policy for an extended gamma process
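The constant variance-to-mean ratio of the standard gamma process, which motivates the extended model, is easy to check by simulation (all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard gamma process: X(t) ~ Gamma(shape=a*t, scale=b), simulated
# through independent gamma-distributed increments.
a, b = 2.0, 0.5
dt, n_steps, n_paths = 0.1, 100, 20_000
incr = rng.gamma(shape=a * dt, scale=b, size=(n_paths, n_steps))
paths = np.cumsum(incr, axis=1)

times = dt * np.arange(1, n_steps + 1)
ratio = paths.var(axis=0) / paths.mean(axis=0)
# The variance-to-mean ratio equals the scale b at every time point.
print(ratio[[9, 49, 99]])   # all close to b = 0.5
```

When degradation data exhibit a variance-to-mean ratio that changes with time, this constancy is exactly the restriction that an extended gamma process (time-varying scale) relaxes.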
|
Maintenance Modelling and its Indexes
Maintenance Modelling and its Indexes Qingan QIU, Lirong CUI
The availability and optimal maintenance policies of a competing-risk system undergoing periodic inspections are studied in this paper. Specifically, a repairable system with a working state and M failure modes is considered, each failure mode having a random failure time. When the system fails from the i-th failure mode (i = 1, 2, …, M), the corresponding corrective replacement (CR) is performed, which takes a random time Yi. Analytical results on the instantaneous availability and the steady-state availability of the system are derived. The model is then used to obtain the optimal inspection interval that maximizes the steady-state availability or minimizes the long-run average cost rate. Availability and maintenance modelling for systems subject to multiple failure modes
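A Monte Carlo sketch of the steady-state availability for such a periodically inspected competing-risk system is given below; exponential lifetimes and all parameter values are assumptions made for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(7)

# M = 3 competing failure modes with exponential lifetimes; a failure
# is only detected at the next periodic inspection, after which a
# mode-specific replacement of random duration Yi is performed.
rates = np.array([0.05, 0.02, 0.01])      # failure-mode rates
repair_means = np.array([2.0, 5.0, 8.0])  # mean replacement times Yi
tau = 4.0                                 # inspection interval
n_cycles = 100_000

t_mode = rng.exponential(1.0 / rates, size=(n_cycles, 3))
mode = t_mode.argmin(axis=1)              # mode that caused the failure
t_fail = t_mode.min(axis=1)               # system failure time
detect = np.ceil(t_fail / tau) * tau      # found at the next inspection
repair = rng.exponential(repair_means[mode])

# Renewal-reward estimate: uptime over total cycle length.
avail = t_fail.sum() / (detect + repair).sum()
print("steady-state availability ~", avail)
```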
Jinting WANG, Zhuang ZHOU
This paper deals with optimal maintenance service contracts for warranty products among three parties: the manufacturer, agents and customers. The service-repair process of the products is formulated as a queueing problem, from which the waiting times of failed products can be derived. A penalty cost for delaying the repair of a failed product is considered. Customers aim to find an optimal time interval between preventive maintenance and replacement during the warranty period. The interaction among the three parties is formulated as a game, which is used to solve the optimal decision problem. We assume that the relationship between the agents and the manufacturer is non-cooperative and that both provide the same maintenance service for the warranted products. The Nash equilibrium warranty price structure is investigated from the customer's perspective. Under this equilibrium price, we obtain the optimal number of customers to be served from the manufacturer's viewpoint. Optimal Decisions in a Three-level Maintenance Service Contract with Strategic Customers
Lirong CUI
In the real world, most systems or products can be repaired or must be maintained routinely, so the study of repairable systems has long been an active topic in reliability. There is a large literature on this topic, for example Limnios and Oprisan (2000), Zheng et al. (2006), Cui (2008), Bao and Cui (2010), Cui et al. (2013) and Yi et al. (2017), among others. In general, a repairable system consists of two parts: a mission operating part and a repair auxiliary part. As systems become more and more complex, and we need to understand them in greater depth and detail, many maintenance indexes are needed to satisfy these demands, and research in this direction has appeared more recently.
Repairable systems have many performance characteristics of interest, so related maintenance indexes continue to be invented; the motivation for new indexes arises not only from theoretical study but also from applications. Moreover, detailed and deep descriptions of repairable systems or products require such indexes.
In this talk, recent developments on maintenance indexes are reviewed. Maintenance indexes can be classified into two categories: (1) deterministic ones, expressed as deterministic functions or values, and (2) random ones, expressed as random variables or stochastic processes. In general, deterministic maintenance indexes are more popular, especially in applications, because of their simplicity and measurability, whereas random maintenance indexes may have some advantages in theoretical research. Maintenance indexes may refer to the whole repairable system, or to the mission operating part and the repair auxiliary part separately. They mainly include: (1) availability; (2) stability; (3) failure frequency; (4) up and down times; (5) busy probability of the repair auxiliary part; (6) failure number; (7) failure intensity; (8) virtual age. Most research has focused on availability, which can be classified, based on known results, into: (1) point availability; (2) steady-state availability; (3) interval availability; (4) mixtures of point and interval availability; (5) average availability, and so on.
The summary is based on recent work that has appeared in the Journal of Applied Probability, IEEE Transactions on Reliability, Reliability Engineering and System Safety, Annals of Operations Research, IIE Transactions, Methodology and Computing in Applied Probability, Quality Technology and Quantitative Management, and other journals and chapters in monographs. The talk is organized by classifications; for availability, these include definitions, formulas and relations in a Markov process environment:
(1). Instantaneous availability, steady-state availability;
(2). Point availability, interval availability, mixture availability (single, multiple);
(3). Deterministic availability, random availability;
(4). Availabilities for mission operating part and repair auxiliary part.
The other maintenance indexes are discussed in the talk as well. Finally, trends and some future work on maintenance indexes are discussed. The main references are as follows: Barlow and Proschan (1965), Csenki (1994, 1995, 2007), Sericola (1990, 1994), Rubino and Sericola (1992), Finkelstein (1999), Hawkes et al. (2011, 2014), Liu et al. (2013), Du et al. (2014), Wu and Hillston (2015), Wang and Cui (2011), Yi and Cui (2017), Cui et al. (2007, 2012, 2014, 2016, 2017).
This survey of maintenance indexes intends to provide a clear picture of this area, which may be useful for applications and further research.
Recent development on maintenance indexes
|
Theoretical Advances in System Reliability
Theoretical Advances in System Reliability Fatemeh MOHAMMADI, Eduardo SAENZ-DE-CABEZON, Henry WYNN
The paper continues the application of monomial ideals to system reliability, which arose from the observation that the structure of the failure set of a system with multilevel components is similar to that of a monomial ideal. The natural inclusion-exclusion formulae for system reliability can then be mapped onto the minimal free resolution of the ideal via its Hilbert series. Here this is extended to the LCM filtration, which captures the number of elementary cuts (generators) in a failure situation. Other filtrations exist and give ways to summarise the robustness of systems. Reliability measures for coherent systems based on LCM filtrations of monomial ideals
Fatemeh MOHAMMADI, Patricia PASCUAL-ORTIGOSA, Eduardo SAENZ-DE-CABEZON, Henry P. WYNN
The algebraic analysis of coherent systems uses the algebra of monomial ideals to compute reliability formulas and bounds for such systems. The authors have applied this framework to the analysis of several relevant systems such as networks, series-parallel systems, k-out-of-n systems and variants, etc. The present work introduces the algebraic operations of polarization and depolarization as tools to study multistate systems via binary systems and vice versa. Polarization transforms a general monomial ideal into a squarefree monomial ideal that shares many relevant features with the original one. This idea can be applied to the algebraic analysis of coherent multistate systems. We provide several examples, including multistate k-out-of-n systems. Polarization of monomial ideals for algebraic reliability analysis of multi-state systems
Arne HUSEBY
A binary monotone system is an ordered pair (E, phi), where E is the component set and phi, the structure function of the system, is a binary function defined for all subsets A of E which is non-decreasing with respect to set inclusion. If all components have the same probability p of functioning, the system reliability can be expressed in terms of the reliability polynomial h(p). Two binary systems are equivalent if their reliability polynomials are equal. An important class of binary monotone systems is the class of undirected network systems, which belongs to the larger class of matroid systems. A matroid is an ordered pair (F, M), where M is a family of incomparable subsets of F, called circuits, satisfying a set of axioms. A matroid system is a binary monotone system (E, phi) which can be associated with a matroid (E ∪ {x}, M) in such a way that the minimal path sets of (E, phi) can be recovered by extracting all the circuits in M containing the element x and then deleting x from these circuits. A subclass of such systems is the class of orientable matroid systems. If (E, phi) is a 2-terminal undirected network system, it is orientable, since directions can be assigned to all the edges. Using the properties of orientable matroid systems, we obtain a way of constructing equivalent systems. Constructing Equivalent Systems of Orientable Matroid Systems
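The reliability polynomial h(p) of a small binary monotone system can be computed directly by enumerating subsets of functioning components; a 2-out-of-3 structure is used below purely as an illustration:

```python
from itertools import combinations

def reliability_polynomial(phi, n):
    """Coefficients N_k = number of functioning subsets of size k, so
    h(p) = sum_k N_k * p^k * (1-p)^(n-k) for iid components."""
    counts = [0] * (n + 1)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if phi(set(subset)):
                counts[k] += 1
    return counts

# Example: 2-out-of-3 system (functions iff at least 2 components work).
phi = lambda A: len(A) >= 2
counts = reliability_polynomial(phi, 3)
print(counts)   # → [0, 0, 3, 1], i.e. h(p) = 3 p^2 (1-p) + p^3

def h(p, counts):
    n = len(counts) - 1
    return sum(c * p**k * (1 - p)**(n - k) for k, c in enumerate(counts))

print(h(0.9, counts))   # 3*0.81*0.1 + 0.729 = 0.972
```

Two systems are equivalent in the sense above exactly when this coefficient vector (and hence h) coincides.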
| |
12h30-14h00 | Lunch | |||||
14h00-15h00 |
Network reliability - 2
Network reliability - 2 Revaz KAKUBAVA, Nino SVANIDZE
In this paper we treat the problem of maintenance modeling and analysis for large-scale complex systems. We discuss some specific factors whose influence is particularly important for the construction, investigation and application of mathematical models for dependability and performance analysis, as well as for the optimal synthesis of territorially distributed networks. Arguments are given for the assertion that considering these factors within the framework of the classical mathematical theory of reliability (the repairman problem) and classical queueing theory is almost impossible. We conclude that it is necessary to develop novel queueing models within the framework of the network maintenance problem. This problem is formulated in the paper, and a diagram illustrating it is provided. On Network Maintenance Problem. A New Approach.
Christian TANGUY
We provide the asymptotic expansion of the all- and $k$-terminal reliability of the complete graph $K_n$ for large $n$, improving on the results by Gilbert and by Frank and Gaul, which were restricted to the first order and $k = 2$. We have also numerically investigated recently defined performance indices such as the average reliability of a graph, and the average of the average of the reliability on all simple graphs having $n$ vertices, which can be expressed as a simple integral of the all-terminal reliability of $K_n$. We propose asymptotic expressions of these quantities in the $n \to \infty$ limit. Complete graph reliabilities: Asymptotic results
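The leading term of the classical asymptotics, in which isolated vertices dominate disconnection, can be compared with a Monte Carlo estimate for moderate n. This sketch uses a simple union-find connectivity check; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def all_terminal_reliability_Kn(n, p, n_sim=20_000):
    """Monte Carlo estimate of the probability that K_n stays connected
    when each edge independently works with probability p."""
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    connected = 0
    for _ in range(n_sim):
        parent = list(range(n))
        def find(a):                      # union-find with compression
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for (i, j) in edges:
            if rng.random() < p:          # edge works: merge components
                parent[find(i)] = find(j)
        if len({find(v) for v in range(n)}) == 1:
            connected += 1
    return connected / n_sim

n, p = 8, 0.5
est = all_terminal_reliability_Kn(n, p)
# Leading-order asymptotics (isolated vertices): 1 - n*(1-p)^(n-1)
print(est, 1 - n * (1 - p) ** (n - 1))
```

Already at n = 8 the two numbers are close, consistent with isolated vertices being the dominant disconnection mechanism.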
Machine learning has achieved great success in solving problems in a wide variety of fields. Interestingly, machine learning methods can also be employed to solve queuing problems, and one such approach is proposed in this paper. At this stage, we point out the following advantages of the approach: it can handle flows with an arbitrary distribution, and it is likely to be faster than simulation modeling. Application of machine learning methods for modeling and calculation of queuing networks
|
Stochastic processes in reliability - 2
Stochastic processes in reliability - 2 Pepa RAMíREZ-COBO, Yoel YERA, Rosa E. LILLO
The Batch Markovian arrival process (BMAP) constitutes a general class of point processes suitable for modeling dependent and correlated batch events (such as arrivals, failures or risk events). BMAPs have been widely considered in the literature from a theoretical viewpoint, especially from a queueing theory perspective. However, fewer works are devoted to statistical inference, which is of crucial importance in reliability contexts, often characterized by dependent and simultaneous failures. In this work, we consider estimation for a wide subclass of BMAPs, namely, the Batch Markov-modulated Poisson processes (BMMPP), which generalize the well-known Markov-modulated Poisson process. A matching-moments technique, supported by a theoretical result that characterizes the process in terms of its moments, is proposed to this aim. Numerical results with both simulated and real datasets will be presented to illustrate the performance of the novel approach. Simultaneous failure modeling by the Batch Markov-Modulated Poisson process
Yoel G. YERA, Rosa E. LILLO, Pepa RAMíREZ-COBO
The Batch Markovian arrival process (BMAP) constitutes a general class of point processes suitable for modeling dependent and correlated simultaneous events (such as arrivals, failures or risk events). BMAPs have been widely considered in the literature from a theoretical viewpoint, identifiability being one of the most studied aspects because of its implications for statistical inference. In this work, we prove the identifiability of a wide subclass of BMAPs, namely, the Batch Markov-modulated Poisson process (BMMPP), which generalizes the well-known Markov-modulated Poisson process (MMPP). New results on the Batch Markov-modulated Poisson process
Weiwen PENG, Qiuzhuang SUN, Jiaxiang CAI
Recurrent failure data of repairable systems are becoming information-rich, containing detailed information about failure modes, failure causes, maintenance records, and so forth, in addition to the failure times themselves. In this paper, one type of recurrent failure data is analyzed, where the failures are categorized into two groups: failures caused by operators' misuse, and failures occurring during normal operation. An interesting question is how to analyze recurrent data with two different types of failure, where the misuse-induced failures may affect the event process of normal-operation failures in an uncertain way. This paper addresses this question by introducing a bivariate point process model and a corresponding Bayesian analysis method. The new model jointly describes misuse-induced failures, normal-operation failures, and the influence of the former on the latter. The Bayesian method provides parameter estimation with quantified uncertainty. A case study with recurrence data from manufacturing systems is presented to demonstrate the proposed method. A bivariate point process model for repairable systems with two dependent failure modes
|
Case studies in reliability analysis
Case studies in reliability analysis Kai HENCKEN, Paolo OLIVA
The breakdown of the electrical insulation of high-voltage components is typically described by a Weibull distribution whose scale parameter depends on the applied voltage or electric field through an inverse power law. In order to predict the lifetime of such components, one needs to extrapolate results obtained on small samples under a homogeneous field to larger dimensions and inhomogeneous electric field distributions.
In this paper we discuss some aspects of this process: we compare the influence of the parameter estimation method (either maximum likelihood or least-squares regression on the order statistics) applied to both uncensored and progressively censored data. The results are rescaled to the real geometry and a quantile of the lifetime is estimated. Due to the small amount of data available, the accuracy of this predicted lifetime needs to be evaluated as well. One-sided confidence bounds for the lifetime are calculated using the Fisher information approach. We find that this in particular leads to a substantial underestimation of the expected lifetime. Estimation of parameter and confidence bounds for a Weibull model with inverse power law dependency applied to high voltage insulation components
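For the complete (uncensored) case, the maximum-likelihood step compared in this abstract reduces to solving the Weibull profile-likelihood equation for the shape parameter. The following is our own minimal sketch of that step, not the authors' implementation (which also covers progressive censoring and regression on order statistics):

```python
import math
import random

def weibull_mle(t):
    """Maximum-likelihood estimates (shape k, scale lam) for a complete
    Weibull sample, solving the profile-likelihood equation for k by
    bisection (the profile score is increasing in k, so the root is unique)."""
    logs = [math.log(x) for x in t]
    mean_log = sum(logs) / len(t)

    def score(k):
        # d(log-likelihood)/dk after profiling out the scale parameter
        tk = [x ** k for x in t]
        return sum(a * b for a, b in zip(tk, logs)) / sum(tk) - 1.0 / k - mean_log

    lo, hi = 1e-3, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) < 0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in t) / len(t)) ** (1.0 / k)
    return k, lam

random.seed(42)
# inverse-transform sampling from Weibull(shape=2, scale=1)
sample = [(-math.log(random.random())) ** 0.5 for _ in range(3000)]
k_hat, lam_hat = weibull_mle(sample)
```

With 3000 observations the estimates land close to the true (2, 1); the abstract's point is precisely that with the small samples typical of insulation tests, the uncertainty of such estimates must be quantified.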
Aurore ARCHIMBAUD, Carole SOUAL, Francois BERGERET, Sophie D'ALBERTO, Thierry THEBAULT, Christian BONNIN
Aerospace Integrated Circuit (IC) reliability increasingly demands a high level of performance. This article is based on a collaborative project between a production company in the aerospace industry (Atmel) and a statistical company (Ippon Innovation). The main objective is to develop an innovative advanced tool to detect multivariate outliers in small samples, based on measurements of thousands of parameters, which is called a high-dimensional situation in statistics. After presenting the context and the computational methods currently used in this industry for the screening of abnormal dice, the article introduces two methods designed for the special case of high-dimensional datasets: the ROBPCA and GAT algorithms. The two methods are compared in a case study, in which GAT has a definite advantage over the other methods in detecting atypical instances. Finally, the integration of this algorithm with the production tool provides the ability to trace the revealed anomalies back to the real measurements involved. Sound statistical methods for addressing small samples and high-dimensional data are needed to detect reliability issues in the space industry. High dimensional outlier screening of small dice samples for Aerospace IC reliability
Maxime REDONDIN, Laurent BOUILLAUT, Dimitri DAUCHER, Nadège FAUL
The reliability of road infrastructure plays a major role in road safety. This is especially true for autonomous vehicles that must read road markings. Such vehicles need an accurate maintenance strategy to guarantee that road markings remain perceptible. To simplify the study of a road, a solution based on Agglomerative Hierarchical Clustering (AHC) segments the road according to the retroreflection level over time. If no maintenance records exist for the markings, a maintenance detector can estimate the laying dates. However, this strategy requires regular inspection data. Currently, French roads are inspected irregularly, roughly once a year, and missing data appear. Three options are compared: accepting the missing data, estimating them by linear interpolation, or an original approach also based on AHC. The last option identifies the most reliable estimates of the missing data. This approach is a first step towards analyzing the useful life of markings with a Weibull analysis. The broken centerline of the French National Road 4 is used to illustrate our approach. RECONSTRUCTION OF MISSING RETROREFLECTIVE DATA ACCORDING TO YEARLY INSPECTION OF MARKINGS
|
Accelerated degradation and life testing
Accelerated degradation and life testing Xiujie ZHAO, Min XIE
The step-stress accelerated degradation test (SSADT) has become a common approach to assessing the lifetime distribution of highly reliable products. The modeling of SSADT has been intensively investigated under the assumption that there is only one underlying degradation process. However, many products suffer from two or more degradation processes that directly lead to failures. The aim of this paper is to model SSADT data with two dependent degradation processes described by a bivariate inverse Gaussian process. The drift parameter of each process is assumed to be influenced by one stress factor. A failure is considered to occur if either or both degradation processes reach the corresponding thresholds. A maximum likelihood estimation framework is established to model the SSADT data. An example is presented to illustrate the proposed method. Modeling step-stress accelerated degradation tests with the bivariate inverse Gaussian process
Huimin FU, Junxing LI, Zhihua WANG, Yongbo ZHANG
Accelerated degradation analysis, which has been widely studied recently, plays an important role in assessing reliability and making maintenance schedules for highly reliable, long-life products. In this paper, a well-adopted form of the Wiener process with measurement errors is used to model the step-stress accelerated degradation test (SSADT). The probability density function (PDF) and the cumulative distribution function (CDF) of the failure time are derived in closed form based on the concept of the first hitting time (FHT). To model the accelerated degradation relationship, the drift parameter of the Wiener process is assumed to depend on the accelerating variable. Moreover, an EM algorithm is adopted to estimate the unknown parameters. Modeling step-stress accelerated degradation using Wiener process with measurement errors
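In the error-free special case, the closed-form failure-time CDF that this abstract derives via the first hitting time is the inverse Gaussian distribution. The sketch below is our own illustration of that baseline (notation assumed: degradation X(t) = mu*t + sigma*B(t), failure threshold omega), not the authors' model with measurement errors:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fht_cdf(t, mu, sigma, omega):
    """CDF of the first hitting time of the threshold omega > 0 by the
    Wiener process X(t) = mu*t + sigma*B(t) with drift mu > 0
    (the inverse Gaussian distribution)."""
    if t <= 0:
        return 0.0
    s = sigma * math.sqrt(t)
    return (phi((mu * t - omega) / s)
            + math.exp(2.0 * mu * omega / sigma ** 2) * phi(-(mu * t + omega) / s))

# mean first hitting time is omega/mu = 5 for these illustrative values
mu, sigma, omega = 1.0, 1.0, 5.0
ts = [0.5 * i for i in range(1, 41)]
vals = [fht_cdf(t, mu, sigma, omega) for t in ts]
print(round(fht_cdf(5.0, mu, sigma, omega), 3))
```

In the SSADT setting of the paper, mu additionally depends on the stress level, and the observation noise modifies the likelihood used in the EM algorithm.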
Yih-Huei HUANG
In accelerated failure time (AFT) models, the covariate acts by expanding or contracting the lifetime. When a covariate is not observed accurately but is repeatedly measured with random errors, an intuitive approach is to use the average of the replicates as the covariate. Such a naive analysis yields biased and inconsistent estimates. We investigate how the naive estimating function is biased and motivate our correction method. The proposed method requires no distributional assumptions on the covariate or the random error; in other words, it is a functional method in the context of measurement error problems. We assess the performance of our method in a simulation study.
Bias correction for the accelerated failure time model with right-censored observations and repeat measurements
|
Maintenance Modeling and Analysis - 2
Maintenance Modeling and Analysis - 2 Bram DE JONGE, Lisa MAILLART, Oleg PROKOPYEV
Existing studies on maintenance optimization for multiple machines generally ignore the travel times required to move from one machine to another. We consider the problem of a single repairman who is responsible for the maintenance activities of a set of geographically distributed machines with condition monitoring. The problem is formulated as a Markov decision process with the aim of obtaining insights into when to relocate and when to carry out preventive and corrective maintenance activities. Combined condition-based maintenance and repairman routing optimization
Satoshi MIZUTANI, Toshio NAKAGAWA
This paper proposes extended overtime policies for a cumulative damage model in which the unit is replaced based on a damage level Z. Shocks occur successively at random times, and the unit suffers some damage from each shock. The unit fails when the total damage exceeds a prespecified level K. The unit is replaced at failure or at the Nth shock after the total damage exceeds level Z (0 <= Z <= K), whichever occurs first. For this model, we obtain the expected cost rate and discuss the optimal number N and level Z which minimize the expected cost rate. Further, numerical examples are given and suitable discussions are made.
Extended Overtime Policies for a Cumulative Damage Model
Yuqiang FU, Xiaoyan ZHU, Tao YUAN
This paper studies a new maintenance policy for a system consisting of multiple functionally interchangeable components. The policy combines component reallocation, minimal repair, and component replacement. Minimal repairs are performed upon emergency failures of components. Without consuming additional resources, the system increases its reliability and extends its useful life by reallocating its components over a certain period, before the entire worn-out system is eventually replaced. We establish a mathematical model to determine the time for component reallocation and the time for component replacement so as to minimize the annual expected maintenance cost. A numerical example illustrates the applications and insights of the model. OPTIMAL REALLOCATION AND REPLACEMENT MAINTENANCE
|
Optimization methods in reliability
Optimization methods in reliability Yi-Fei YUAN, Yan-Fu LI
The power grid is a typical example of critical infrastructure. Although it is heavily protected, its failure rate is still higher than expected. Resilience is a relevant concept for dealing with this phenomenon: a resilient power grid should be able to 'bounce back' to its normal operating condition in a short time and at a low cost. Different from previous studies, which mainly focus on resilience assessment, this work aims to optimize the recovery process of the power grid after a cascading failure. POWER GRID RECOVERY OPTIMIZATION AFTER A CASCADING FAILURE
Nacef TAZI, Eric CHATELET, Youcef BOUZIDI
Wind turbine systems can provide the intended performance at different states. Reliability and maintainability are important not only to provide electricity under demand constraints, but also to minimize unavailability costs and downtime. An effective maintenance policy should therefore increase system performance so that the electricity demand can be met, and also reduce the consequences of deficiency (unavailability). In this study, we continue previous work on maintenance optimization under cost constraints, and also introduce wind speed states and different load types and states into the availability assessment. The maintenance policy is based on preventive repair and minimal repair. The objective is to find the system replacement policy that minimizes the costs of preventive and corrective maintenance under constraints on system availability and on the expected maintenance duration. System availability also depends on the wind speed and load states. The presented model is based on the universal generating function (UGF), applied to evaluate multi-state system (MSS) performance. OPTIMIZING MULTI-STATE POWER SYSTEM MAINTENANCE POLICY UNDER CONSTRAINTS OF LOADS AND COSTS: APPLICATION TO WIND FARMS
Amos GERA
A comparative study is presented between various extensions of a start-up demonstration procedure which is based on combinatorics. The expected number of required tests and the probability of accepting the tested unit are derived using a set of auxiliary functions. A constrained optimization problem is solved for minimizing the number of required tests subject to some confidence level requirements. The variables for this optimization include the total number of successes, failures, and the maximal lengths of runs of successes and failures. Some models for a start-up demonstration test procedure
|
15h10-16h10 |
k-out-of-n systems - 2
k-out-of-n systems - 2 Anna DEMBIńSKA
In this paper we present techniques for computing moments of order statistics arising from independent but not necessarily identically distributed discrete random variables. We derive expressions for single moments of such order statistics. Next, we apply these expressions to establish moments of lifetimes of k-out-of-n systems consisting of heterogeneous components with discrete lifetimes. In particular, we obtain formulas describing the expectations and variances of the lifetimes of such systems. Computing moments of discrete lifetimes of k-out-of-n systems with heterogeneous components
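To make the setting concrete (a brute-force sketch of our own, not the authors' closed-form expressions), the expected lifetime of a k-out-of-n system with independent heterogeneous discrete lifetimes can be computed from E[T] = sum over t >= 1 of P(T >= t), where P(T >= t) is a Poisson-binomial tail probability. The geometric component lifetimes below are an assumed example:

```python
def p_at_least_k_alive(surv_probs, k):
    """P(at least k of n independent components are alive), given each
    component's survival probability -- a Poisson-binomial tail
    computed by dynamic programming."""
    dist = [1.0]                       # dist[j] = P(exactly j alive)
    for q in surv_probs:
        new = [0.0] * (len(dist) + 1)
        for j, pj in enumerate(dist):
            new[j] += pj * (1.0 - q)   # this component is down
            new[j + 1] += pj * q       # this component is up
        dist = new
    return sum(dist[k:])

def expected_lifetime(k, geo_q, horizon=10_000):
    """E[T] of a k-out-of-n system whose component i has a geometric
    lifetime on {1, 2, ...} with P(X_i >= t) = geo_q[i]**(t - 1),
    using E[T] = sum_{t>=1} P(T >= t) (truncated once the tail is tiny)."""
    total = 0.0
    for t in range(1, horizon + 1):
        s = p_at_least_k_alive([q ** (t - 1) for q in geo_q], k)
        total += s
        if s < 1e-12:
            break
    return total

qs = [0.9, 0.8, 0.7]
# series system (k = n): E[min] = 1 / (1 - q1*q2*q3) exactly
print(round(expected_lifetime(3, qs), 6))
```

The paper's contribution is to replace this kind of enumeration by explicit moment formulas for the order statistics involved.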
Amos GERA
A practical way of estimating the reliability of a kms-within-m-out-of-n system is presented. Numerical calculations show full agreement with results given in other references. Then, the combined ks-out-of-n and kms-within-m-out-of-n: G system is examined, and its superiority over the individual procedures is demonstrated. The combined ks-out-of-n and kms-within-m-out-of-n problem
Generally, the failure of a system can be caused both by internal degradation and by external factors. Among the external factors, intentional attacks, in which the attacker chooses an attack strategy according to the system's protection strategy, have attracted a great deal of attention in recent years. When a system is exposed to intentional attacks, a defender who decides how to distribute defense resources among different defensive measures is essential. Defense strategies against intentional external attacks are therefore becoming more and more important in system reliability and defense theory. This work studies defense and attack for a consecutive system with n components, which fails if at least k consecutive components fail. At this stage, we investigate the optimal attack strategies for a given defense strategy; the optimal defense strategy will be explored in future work. DEFENDING A CONSECUTIVE K-OUT-OF-N SYSTEM AGAINST AN ATTACKER
|
Lifetime Data Analysis - 2
Lifetime Data Analysis - 2 Sukhmani SIDHU, Kanchan JAIN, Suresh SHARMA
Multivariate survival analysis comprises event times that are generally grouped together in clusters. Observations in each cluster relate to the same individual or to individuals sharing a common factor. Frailty models can be used when there is unaccounted association between the survival times within a cluster. The frailty variable describes the heterogeneity in the data caused by unknown covariates or randomness. In this article, we use the generalized gamma distribution to describe the frailty variable and discuss the Bayesian method of estimation for the parameters of the model. The baseline hazard function is assumed to follow a two-parameter Weibull distribution. Data are simulated from the given model and the Metropolis-Hastings MCMC algorithm is used to obtain parameter estimates. It is shown that increasing the size of the dataset improves the estimates, and that high heterogeneity within clusters does not significantly affect the estimates of treatment effects. The model is also applied to a real-life dataset.
Bayesian Estimation of Generalized Gamma Shared Frailty Model
Kunsong LIN, Yunxia CHEN
Reliability growth has always been an area of great interest. In the literature, most existing models regard the failure intensity as constant within each test phase when considering the test-find-test strategy. However, this assumption is questionable. This paper proposes a new model with a time-varying failure intensity in each phase. Moreover, the proposed model assumes that the scale parameters of all phases share a common constant value, which exploits more of the information in the data from different phases. An example with real reliability growth data shows that the assumptions are reasonable and that the proposed model outperforms existing models.
A multi-phase reliability growth model considering test-find-test strategy
|
Stochastic processes in reliability - 3
Stochastic processes in reliability - 3 Ming LUO, Shaomin WU
Nowadays, many products consist of both software and hardware subsystems (e.g. 3C products (Computer, Communication and Consumer Electronics) and cars). As a result, warranty claims for these products can be due to software and/or hardware failures. However, existing research on warranty management mostly focuses on hardware or software systems separately, and the interactions between them are ignored. This paper analyses warranty costs incurred due to failures of the two subsystems. It considers two warranty policies for software systems, depending on the method of software updating. Numerical examples are given to illustrate the proposed models.
Warranty cost analysis considering hardware and software failures
Bram WESTERWEEL, Rob BASTEN, Geert-Jan VAN HOUTUM
In recent years, additive manufacturing (AM), also known as 3D printing, has developed rapidly. A promising application of this technology comes in the form of on-site and on-demand printing of spare parts. However, given the current limitations of AM technology, printed parts are subject to a lower reliability compared with their regular counterparts. Therefore, we investigate to what extent this on-site AM capacity is complementary to existing means of resupply. We consider a periodic-review, spare part inventory control system with three supply sources. Regular supply of spare parts is done via periodic resupply shipments. In between these shipments, backorders for spare parts can be met by means of expediting or by printing a part. We obtain insight into the operational cost reductions that can be attained by investing in on-site AM capacity and into how this capacity should be operated. On-site additive manufacturing of temporary spare parts
Lucie BERNARD, Philippe LEDUC
We are interested in estimating a failure probability, defined as the probability that a random variable, which is costly to simulate and whose distribution is unknown, exceeds a threshold. In view of the restricted number of available observations of this random variable, classical Monte Carlo simulation methods cannot be used. We therefore adopt a Bayesian approach and assume that the failure probability is a realization of a random variable based on the Gaussian process regression model. In order to provide a reliable estimation of the failure probability, it is desirable to learn as much as possible about the posterior distribution of this random variable. As this is not straightforward, we propose an alternative random variable whose good properties, in terms of simulation, improve the estimation of the failure probability. In particular, we show that there exists a convex order between these two variables and we exploit the resulting properties. An alternative estimator of a probability of failure
|
Prognostic and health management
Prognostic and health management Manuel ESTEVES, Eusebio NUNES
Battery Management Systems (BMS) have brought new impetus to battery energy management, leading to an increase in battery life. But the BMS fails when the State of Charge (SoC), State of Health (SoH), State of Life (SoL) or Remaining Useful Life (RUL) prognostic systems do not provide the required accuracy. Despite the increasing complexity and accuracy of battery models, performance remains low under fluctuating temperature and load profiles. With the development of innovative products for wide-ranging applications, battery materials, technologies, reliability and safety are being pushed to their limits. Therefore, a huge amount of work remains to be done, not only on the development of new battery technologies but also on the BMS and on the accuracy of battery models and metrics. The paper gives an overview of the applicability, accuracy, weaknesses and advantages of recent battery models. We also discuss how Prognostics and Health Management (PHM) can support technological progress in battery matters through improvements in the accuracy of battery models and metrics. OVERVIEW ON BATTERY MODELS APPROACHES
Toufik AGGAB, Frédéric KRATZ, Pascal VRIGNAT, Manuel AVILA
In this paper, we propose an approach to failure prognosis based on the online estimation of the residual life before the system performance requirement is no longer met; it consists in anticipating the onset of failures of a system for a specific mission. The proposed approach is based on a behavioral model of the system and proceeds in two phases. The first phase uses the data available on the system to estimate unmeasured states and relevant parameters able to characterize system performance, given that the degradations may remain partially or totally hidden; this phase is carried out with an observer. In the second phase, the historical states and parameters obtained in the first phase are exploited to estimate online the remaining duration before the performance requirement is no longer met. Since the models describing the parameter dynamics are assumed unknown, we use time series prediction methods. A Li-ion battery is used to illustrate the proposed approach. PROGNOSIS METHOD USING AN OBSERVER AND TIME SERIES PREDICTION METHODS
Dacheng ZHANG, Catherine CADET, Christophe BERENGUER, Nadia YOUSFI-STEINER, Piero BARALDI, Enrico ZIO
This paper presents a prognosis approach based on an ensemble of two models for the remaining useful life (RUL) prediction of a fuel cell stack. Two different physics-based models are used, and the prognosis procedure is implemented using particle filters and fed by measurements taken at different levels in the system. The first particle filter receives in input a signal directly observable and related to the component degradation (e.g. the fuel cell output voltage) which can be frequently and easily measured and relies on a simplified model of the degradation trend. The second particle filter is fed by measurements from the physical characterization of the system, which are seldom acquired by periodic inspections, and uses a model of the health state evolution, from which the degradation state is estimated. The outcomes of the two particle filters are aggregated using a local approach to obtain the ensemble predictions. A Study of Local Aggregation of an Ensemble of Models for RUL Prediction
|
Degradation modelling and analysis - 3
Degradation modelling and analysis - 3 Waltraud KAHLE
We consider a Wiener process with linear drift for degradation modeling. Inspections are carried out regularly, and the level of degradation is measured. At each inspection point, the degradation level is reduced by a maintenance action. In the talk, we consider the influence of such maintenance actions on the further development of the degradation process and on the resulting lifetime distribution. A connection is developed between the virtual age in Kijima-type models and the degradation level in the underlying degradation process. Further, estimators for the process parameters as well as for the degree of repair are developed. INCOMPLETE REPAIR IN DEGRADATION PROCESSES: A KIJIMA-TYPE APPROACH
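A simple way to visualize the effect described in this abstract is to simulate a drifting Wiener degradation path in which each periodic inspection removes a fraction rho of the accumulated degradation (rho = 0: no repair, rho = 1: perfect repair). This is our own minimal sketch under assumed parameter values, not the talk's model or estimators:

```python
import math
import random

def lifetime(mu, sigma, tau, rho, threshold, dt=0.01):
    """Simulate one failure time of the Euler-discretized Wiener
    degradation process X(t) = mu*t + sigma*B(t); every tau time units
    the degradation level is reduced by the factor rho (imperfect repair),
    in the spirit of a Kijima-type virtual-age reduction."""
    x, t, next_insp = 0.0, 0.0, tau
    while x < threshold:
        x += mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
        if t >= next_insp:
            x *= (1.0 - rho)          # repair removes a fraction rho of degradation
            next_insp += tau
        if t > 500.0:                  # safety cap for this sketch
            break
    return t

random.seed(7)
runs = 200
no_rep = sum(lifetime(1.0, 0.3, 2.0, 0.0, 6.0) for _ in range(runs)) / runs
rep = sum(lifetime(1.0, 0.3, 2.0, 0.3, 6.0) for _ in range(runs)) / runs
print(round(no_rep, 2), round(rep, 2))
```

Without repair the mean failure time is close to threshold/mu = 6; with rho = 0.3 the repairs roughly double the lifetime, illustrating why the degree of repair is worth estimating from data.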
Zeina AL MASRY, Bruno CASTANIER, Fabrice GUERIN, Mitra FOULADIRAD
There is no longer any need to demonstrate the relevance of degradation test plans compared with classical life test plans. One of their main advantages is the increasing amount of collected data and the ability to extrapolate lifetime distributions when degradation models are available. Nevertheless, further developments of classical and accelerated tests in the context of degradation are still required, especially when the degradation pattern is clearly non-homogeneous. The aim of this paper is to propose a degradation test plan for non-homogeneous gamma processes, for products with cumulative degradation. We first propose a test plan based on minimizing the asymptotic variance of the true reliability of the products. Next, we apply the proposed plan to a specific parametric form of the shape function of a gamma process. A degradation test plan for a non-homogeneous gamma process
Xun XIAO
Degradation modelling has become a critical topic when investigating and predicting the reliability and lifetime of modern engineering products and systems. Various statistical models have been proposed to model the degradation of different products. Much existing research assumes that the degradation models are invariant with respect to time. However, the operating profile and environment of a product may change or be subject to external shocks, which can change the degradation pattern. In particular, we consider the problem of testing for a change in the trend of two linear degradation models under a periodic inspection plan: a linear regression model and a Wiener process. Simple statistics are recommended to test for the existence of shocks in these two models. The consequences of misspecifying the two models are investigated via a simulation study, and a real case study on railway track degradation is presented to illustrate our procedure. Test a Change in Trend in Linear Degradation Models
|
Multi-State system reliability - 2
Multi-State system reliability - 2 Xiang-Yu LI, Yan-Feng LI, Hong-Zhong HUANG, Enrico ZIO
Phased-mission systems (PMSs) are widely used, especially in the aerospace industry. Travelling in outer space, these systems are exposed to cosmic rays, such as Galactic Cosmic Rays, which can have a significant impact on the equipment's electronics. In this paper, a model for PMS reliability assessment is proposed that accounts for the random shocks coming from cosmic rays. To simplify the model for each phase and reduce the number of system states, a modularization method is applied, and a semi-Markov process (SMP) is adopted to deal with the complex dynamic behaviors within the modules. A Monte Carlo simulation procedure is proposed to assess the reliability of the PMS. As a case study, the reliability of a phased AOCS (Attitude and Orbit Control System) is considered. Integrating random shocks into reliability assessment of Phased-mission system
Yu LIU, Tangfan XIAHOU, Tao JIANG
As epistemic uncertainty is inevitable in reliability analysis, this paper extends the composite Birnbaum importance measure of multi-state systems to the context of epistemic uncertainty. The epistemic uncertainties associated with the discrimination of component states are characterized by an evidential Markov model. The propagation of epistemic uncertainty from the components to the entire system is handled by an evidential network. Additionally, the conditional belief and plausibility reliabilities can be computed by the evidential network when hard or vacuous evidences are input. A new extended composite Birnbaum importance measure is then defined and converted into an optimization problem. A multi-state bridge system is used to demonstrate the proposed importance measure. AN EXTENDED COMPOSITE BIRNBAUM IMPORTANCE MEASURE OF MULTI-STATE SYSTEMS UNDER EPISTEMIC UNCERTAINTY
Shijia DU, Rui KANG
In this paper, we develop a multistate model for the resilience analysis of a distributed generation system. A quantitative index, the optimal recovery time, is defined to quantify the resilience of multistate systems, and a simulation-based method is presented for its evaluation. The developed method is applied to a distributed generation system to demonstrate its applicability. A multistate model for resilience analysis of a distributed generation system
|
16h10-16h30 | Coffee break | |||||
16h30-17h15 | Zhisheng YE Risk Quantification from Degradation Data Degradation studies are often used to assess reliability of products subject to degradation-induced soft failures. They are also important tools for risk assessment of emerging contaminants (ECs), a new environmental threat due to the increasing consumption of newly synthesized compounds. Motivated by degradation tests on some ECs, the main purpose of this work is to construct confidence intervals for some important quantities that reflect the risk of the ECs. Risk assessment of each individual EC is often done through a competition experiment. This results in a two-dimensional degradation process. On the other hand, water treatments need to deal with degradation of several ECs in the waters. Therefore, we will investigate interval estimation for multivariate Wiener processes. Block effects in the degradation tests will also be considered. Risk Quantification from Degradation Data Chair: N. BALAKRISHNAN, Room A | |||||
17h15-17h30 | Closing session, towards MMR 2019: Olivier GAUDOIN (Room A) |