Tuesday, December 6, 2011

Markov-based Reliability Models. Part 1.

Hello dear readers,

Let me discuss a group of reliability models that have one common feature. These models are based on Markov chains.

Some history. In 1906, Andrey Markov presented a study of an important new type of chance process, in which the outcome of a given experiment can affect the outcome of the next experiment. The term "chain" was first used by Markov in [A. A. Markov. Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga. Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 15(2):135–156, 1906.]. He produced the first results for these processes purely theoretically. A generalization to countably infinite state spaces was given later by A. Kolmogorov in 1936. Markov chains are related to Brownian motion and the ergodic hypothesis, two topics in physics that were important in the early years of the twentieth century, but Markov appears to have pursued them out of a mathematical motivation, namely the extension of the law of large numbers to dependent events.
Nowadays, Markov chains are widely used in the engineering domain for system analysis, modeling, and estimation of various non-functional system properties. Countless books describe the different types of Markov chains (discrete, continuous, absorbing, ergodic, regular, etc.) using a bunch of complex probabilistic terms. The most accessible representation of a Markov chain is a directed state graph. The nodes of this graph define the state space of the system; the arcs represent transitions from one state to another. The arcs are weighted with transition probabilities, and the sum of the transition probabilities of the outgoing arcs of each node equals 1.
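As a toy illustration, such a state graph can be written down directly in code as a weighted directed graph. The states and probabilities below are invented for illustration only, not taken from any real system:

```python
# A Markov chain as a directed state graph: each node maps to its outgoing
# arcs, weighted with transition probabilities. States and probabilities
# here are invented for illustration.
chain = {
    "Operational": {"Operational": 0.95, "Degraded": 0.05},
    "Degraded":    {"Operational": 0.70, "Failed":   0.30},
    "Failed":      {"Failed": 1.0},  # absorbing state
}

# The transition probabilities of the outgoing arcs of each node sum to 1.
for state, arcs in chain.items():
    total = sum(arcs.values())
    assert abs(total - 1.0) < 1e-9, state
print("all rows sum to 1")
```

The dictionary-of-dictionaries form is just one convenient encoding; for larger chains, a transition matrix is the usual choice.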

The next example gives an intuition into the application of a Markov chain to simple reliability analysis. Assume we have a faulty system that starts its regular operation. During this operation, a fault can be activated with a known probability P_FA. The fault activation leads to an erroneous system state. However, with the probability P_ED, the error can be detected. After that, the error can be corrected with a probability P_EC, which restores the original system state. Otherwise, the system stops with an error message (fail stop) in order to prevent a system failure. If the erroneous system state is not identified, the system fails.
This Markov chain describes the behavior of the discussed system. 'Regular system operation' is the initial state of the system. With the probability P_FA the system moves to 'erroneous system operation', and with the probability (1-P_FA) it successfully completes its operation. Error detection behavior is modeled in the same manner. The final states 'Intended completion', 'Fail stop', and 'System failure' represent the three possible system execution scenarios. If P_FA, P_ED, and P_EC are known, we can compute the probabilities of these scenarios. For instance, the probability of a system failure equals
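One way to sanity-check such a chain is a simple Monte Carlo simulation that walks the state graph until one of the three absorbing states is reached. A minimal sketch, with the three probabilities set to assumed example values:

```python
import random

# Assumed example values for the three transition probabilities.
P_FA, P_ED, P_EC = 0.1, 0.9, 0.8

def run_once(rng):
    """Walk the chain from 'Regular system operation' to an absorbing state."""
    while True:
        if rng.random() >= P_FA:
            return "Intended completion"   # no fault activated
        if rng.random() >= P_ED:
            return "System failure"        # error not detected
        if rng.random() >= P_EC:
            return "Fail stop"             # detected but not corrected
        # Error corrected: back to regular operation, loop again.

rng = random.Random(42)
N = 100_000
counts = {}
for _ in range(N):
    outcome = run_once(rng)
    counts[outcome] = counts.get(outcome, 0) + 1

for outcome in sorted(counts):
    print(outcome, counts[outcome] / N)
```

With these values, the simulated frequency of 'System failure' converges to roughly 0.011, matching the closed-form result derived below in the post.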

P_SF = P_FA*(1-P_ED) + (P_FA*P_ED*P_EC)*P_FA*(1-P_ED) + (P_FA*P_ED*P_EC)^2*P_FA*(1-P_ED) + ... = P_FA*(1-P_ED) / (1 - P_FA*P_ED*P_EC)

(1-P_SF) represents the probability of failure-free system execution and can be considered a measure of system reliability.
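The closed form can be verified numerically by summing the geometric series term by term; the same summation also yields the probabilities of the other two scenarios. The probability values below are assumed for illustration:

```python
# Assumed example values; any probabilities in [0, 1) work here.
P_FA, P_ED, P_EC = 0.1, 0.9, 0.8

# Probability of one full "activate -> detect -> correct" loop back to
# regular operation; this is the common ratio of the geometric series.
r = P_FA * P_ED * P_EC

# Closed form: P_SF = P_FA*(1-P_ED) / (1 - r)
p_failure = P_FA * (1 - P_ED) / (1 - r)

# The same summation gives the two other absorbing states.
p_completion = (1 - P_FA) / (1 - r)
p_fail_stop = P_FA * P_ED * (1 - P_EC) / (1 - r)

# Term-by-term check of the series for P_SF.
p_series, term = 0.0, P_FA * (1 - P_ED)
for _ in range(200):
    p_series += term
    term *= r
assert abs(p_series - p_failure) < 1e-12

# The three scenarios are exhaustive, so their probabilities sum to 1.
assert abs(p_failure + p_completion + p_fail_stop - 1.0) < 1e-12

print(f"P_SF = {p_failure:.6f}, reliability = {1 - p_failure:.6f}")
```

The formulas for 'Intended completion' and 'Fail stop' are not in the post itself; they follow from the identical geometric-series argument, with only the final transition out of the loop changed.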

This trivial example demonstrates the very general idea of applying Markov chains to system reliability analysis. The state space of the Markov chain can be much bigger and can, for example, distinguish between fault activations in different system components and/or different types of faults. The arcs can also represent a variety of system activities besides fault activations; for example, error propagation or even the control flow between system components can be taken into account. In the next post, I will discuss several more advanced Markov-based reliability models.
