Final (limiting) probabilities of the states of a QS. What is the sum of the probabilities of all states of the system?

Consider a mathematical description of a Markov process with discrete states and continuous time, using the example of the random process from Problem 15.1, the graph of which is shown in Fig. 15.1. We will assume that all transitions of the system from state Si to Sj occur under the influence of the simplest streams of events with intensities λij (i, j = 0, 1, 2, 3); thus, the transition of the system from state S0 to S1 will occur under the influence of the flow of failures of the first node, and the reverse transition from S1 to S0 under the influence of the flow of "completions of repairs" of the first node, etc.

The state graph of the system with the intensities at the arrows will be called marked (see Fig. 15.1). The system S under consideration has four possible states: S0, S1, S2, S3.

The probability of the i-th state is the probability pi(t) that at the moment t the system will be in state Si. Obviously, for any moment t the sum of the probabilities of all states is equal to one:

p0(t) + p1(t) + p2(t) + p3(t) = 1. (15.8)

Consider the system at the moment t and, taking a small interval Δt, let us find the probability p0(t + Δt) that at the moment (t + Δt) the system will be in state S0. This can come about in different ways.

1. The system at the moment t was in state S0 with probability p0(t), and during the time Δt it did not leave this state.

The system can be brought out of this state (see the graph in Fig. 15.1) by the total simplest flow with intensity (λ01 + λ02), i.e., in accordance with (15.7), with a probability approximately equal to (λ01 + λ02)Δt. The probability that the system will not leave state S0 is [1 − (λ01 + λ02)Δt]. By the probability multiplication theorem, the probability that the system will be in state S0 in the first way (i.e., that it was in state S0 and did not leave it during the time Δt) is equal to p0(t)[1 − (λ01 + λ02)Δt].

2. The system at the moment t was in state S1 or S2 with probability p1(t) (or p2(t)) and during the time Δt passed to state S0.

By a flow of intensity λ10 (or λ20; see Fig. 15.1), the system will pass to state S0 with a probability approximately equal to λ10Δt (or λ20Δt). The probability that the system will be in state S0 in this way is equal to p1(t)λ10Δt (or p2(t)λ20Δt).

Applying the theorem of addition of probabilities, we get

p0(t + Δt) = p0(t)[1 − (λ01 + λ02)Δt] + p1(t)λ10Δt + p2(t)λ20Δt,

whence

[p0(t + Δt) − p0(t)] / Δt = λ10 p1(t) + λ20 p2(t) − (λ01 + λ02) p0(t).

Passing to the limit as Δt → 0 (the approximate equalities associated with the application of formula (15.7) then become exact), we obtain on the left-hand side of the equation the derivative p′0(t) (denote it p′0 for brevity):

p′0 = λ10 p1 + λ20 p2 − (λ01 + λ02) p0.

We have obtained a differential equation of the first order, i.e., an equation containing both the unknown function itself and its first-order derivative.

Reasoning similarly for the other states of the system S, one can obtain the system of Kolmogorov differential equations for the probabilities of the states:

p′0 = λ10 p1 + λ20 p2 − (λ01 + λ02) p0,
p′1 = λ01 p0 + λ31 p3 − (λ10 + λ13) p1,
p′2 = λ02 p0 + λ32 p3 − (λ20 + λ23) p2,
p′3 = λ13 p1 + λ23 p2 − (λ31 + λ32) p3. (15.9)

Let us formulate a rule for composing the Kolmogorov equations. On the left-hand side of each of them stands the derivative of the probability of the i-th state. On the right-hand side stands the sum of the products of the probabilities of all states from which arrows go to the given state by the intensities of the corresponding streams of events, minus the total intensity of all streams leading the system out of the given state, multiplied by the probability of the given (i-th) state.

In system (15.9) there is one fewer independent equation than the total number of equations. Therefore, to solve the system it is necessary to add equation (15.8).

A peculiarity of solving differential equations in general is that one must specify so-called initial conditions, in this case the probabilities of the states of the system at the initial moment t = 0. Thus, for example, it is natural to solve the system of equations (15.9) under the condition that at the initial moment both nodes are in good order and the system was in state S0, i.e., under the initial conditions p0(0) = 1, p1(0) = p2(0) = p3(0) = 0.

The Kolmogorov equations make it possible to find all state probabilities as functions of time. Of particular interest are the probabilities of the states pi(t) in the limiting, stationary mode, i.e., as t → ∞, which are called the limiting (or final) probabilities of the states.

In the theory of stochastic processes, it is proved that if the number of states of the system is finite and from each of them it is possible (in a finite number of steps) to go to any other state, then the limiting probabilities exist.

The limiting probability of a state Si has a clear meaning: it shows the average relative residence time of the system in this state. For example, if the limiting probability of the state S0 is p0 = 0.5, this means that on average the system spends half of the time in state S0.

Since the limiting probabilities are constant, replacing their derivatives in the Kolmogorov equations by zero values, we obtain a system of linear algebraic equations describing the stationary regime. For the system S with the state graph shown in Fig. 15.1, this system of equations has the form:

(λ01 + λ02) p0 = λ10 p1 + λ20 p2,
(λ10 + λ13) p1 = λ01 p0 + λ31 p3,
(λ20 + λ23) p2 = λ02 p0 + λ32 p3,
(λ31 + λ32) p3 = λ13 p1 + λ23 p2. (15.10)

System (15.10) can be composed directly from the marked state graph, if we follow the rule according to which on the left in each equation stands the limiting probability pi of the given state multiplied by the total intensity of all flows leading out of the given state, and on the right stands the sum of the products of the intensities of all flows entering the i-th state by the probabilities of those states from which these flows originate.

15.2. Find the limiting probabilities for the system S of Problem 15.1, the state graph of which is shown in Fig. 15.1, for the given values of the intensities λij.

Solution. The system of algebraic equations describing the stationary regime for the given system has the form (15.10), or

(λ01 + λ02) p0 = λ10 p1 + λ20 p2,
(λ10 + λ13) p1 = λ01 p0 + λ31 p3,
(λ20 + λ23) p2 = λ02 p0 + λ32 p3,
p0 + p1 + p2 + p3 = 1. (15.11)

Here, instead of one "extra" equation of system (15.10), we wrote down the normalization condition (15.8).

Solving system (15.11), we obtain p0 = 0.40, p1 = 0.20, p2 = 0.27, p3 = 0.13; i.e., in the limiting, stationary mode the system S will on average spend 40% of the time in state S0 (both nodes in good order), 20% in state S1 (the first node under repair, the second working), 27% in state S2 (the second node under repair, the first working), and 13% of the time in state S3 (both nodes under repair).
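Answers of this kind can be checked mechanically. The sketch below (an illustration, not part of the original problem) solves the balance equations of form (15.10), with one equation replaced by the normalization condition, using exact rational Gaussian elimination. The intensities are assumed values chosen to be consistent with the answers quoted in the solution, since the actual numbers of Problem 15.1 are not reproduced in this excerpt.

```python
from fractions import Fraction as F

# Illustrative intensities (assumed values; the actual numbers of
# Problem 15.1 are not reproduced in this excerpt). They are chosen
# to be consistent with the answers quoted in the solution.
lam = {(0, 1): F(1), (0, 2): F(2), (1, 0): F(2), (1, 3): F(2),
       (2, 0): F(3), (2, 3): F(1), (3, 1): F(3), (3, 2): F(2)}

n = 4
# Balance equation of state i: (total outgoing intensity) * p_i
# equals the sum over j of lam[j, i] * p_j.
A = [[F(0)] * n for _ in range(n)]
b = [F(0)] * n
for i in range(n - 1):
    A[i][i] = sum(rate for (src, dst), rate in lam.items() if src == i)
    for (src, dst), rate in lam.items():
        if dst == i:
            A[i][src] -= rate
# Replace the last (dependent) equation with the normalization condition.
A[n - 1] = [F(1)] * n
b[n - 1] = F(1)

# Gauss-Jordan elimination with exact rational arithmetic.
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
p = [b[i] / A[i][i] for i in range(n)]

print([round(float(x), 2) for x in p])  # → [0.4, 0.2, 0.27, 0.13]
```

With these assumed intensities the exact solution is p = (2/5, 1/5, 4/15, 2/15), which rounds to the values 0.40, 0.20, 0.27, 0.13 quoted above.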

15.3. Find the average net income from operating the system S in the stationary mode under the conditions of Problems 15.1 and 15.2, if it is known that per unit of time the correct operation of the first and second nodes brings income of 10 and 6 den. units respectively, while their repair requires expenditures of 4 and 2 den. units respectively. Evaluate the economic efficiency of the available possibility of halving the average repair time of each of the two nodes, if this doubles the cost of repairing each node (per unit of time).

Solution. From Problem 15.2 it follows that, on average, the first node works properly for a fraction of the time equal to p0 + p2 = 0.40 + 0.27 = 0.67, and the second node for p0 + p1 = 0.40 + 0.20 = 0.60. At the same time, the first node is under repair on average for a fraction of the time equal to p1 + p3 = 0.20 + 0.13 = 0.33, and the second node for p2 + p3 = 0.27 + 0.13 = 0.40. Therefore, the average net income per unit of time from operating the system, i.e., the difference between income and expenditures, is

D = (10 · 0.67 + 6 · 0.60) − (4 · 0.33 + 2 · 0.40) = 8.18 den. units.

A twofold decrease in the average repair time of each of the nodes, in accordance with (15.6), means a twofold increase in the intensity of the flow of "completions of repairs" of each node. With these doubled intensities, the system of linear algebraic equations (15.10) describing the stationary regime of the system S, together with the normalization condition (15.8), is solved anew.

Having solved the new system, we obtain p0 = 0.60, p1 = 0.15, p2 = 0.20, p3 = 0.05.

Considering that p0 + p2 = 0.60 + 0.20 = 0.80, p0 + p1 = 0.60 + 0.15 = 0.75, p1 + p3 = 0.15 + 0.05 = 0.20, p2 + p3 = 0.20 + 0.05 = 0.25, and that the costs of repairing the first and second nodes are now 8 and 4 den. units respectively, we calculate the average net income per unit of time:

D1 = (10 · 0.80 + 6 · 0.75) − (8 · 0.20 + 4 · 0.25) = 9.9 den. units.

Since D1 is larger than D by about 20%, the economic expediency of speeding up the repair of the nodes is evident.
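The arithmetic of Problems 15.2 and 15.3 condenses into a few lines. The sketch below uses only the probabilities and prices quoted in the text; the helper name `net_income` is introduced here for illustration.

```python
# Stationary-mode probabilities quoted in the text: before and after
# the repair speed-up of Problem 15.3.
p_before = {"S0": 0.40, "S1": 0.20, "S2": 0.27, "S3": 0.13}
p_after  = {"S0": 0.60, "S1": 0.15, "S2": 0.20, "S3": 0.05}

def net_income(p, cost1, cost2, income1=10, income2=6):
    """Average net income per unit time: income while a node works
    minus expenditure while it is under repair."""
    work1 = p["S0"] + p["S2"]   # first node working (states S0, S2)
    work2 = p["S0"] + p["S1"]   # second node working (states S0, S1)
    rep1  = p["S1"] + p["S3"]   # first node under repair
    rep2  = p["S2"] + p["S3"]   # second node under repair
    return income1 * work1 + income2 * work2 - cost1 * rep1 - cost2 * rep2

D  = net_income(p_before, cost1=4, cost2=2)   # 8.18 den. units
D1 = net_income(p_after,  cost1=8, cost2=4)   # 9.9 den. units
print(round(D, 2), round(D1, 2), round(D1 / D - 1, 2))
```

The last printed number is the relative gain, about 0.21, which matches the "about 20%" conclusion of the solution.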

  • When writing the system (15.10), we excluded one "superfluous" equation.

In considering Markov processes with discrete states and continuous time, it is convenient to imagine that all transitions of the system S from state to state occur under the action of certain flows of events (a flow of calls, a flow of failures, a flow of recoveries, etc.). If all the flows of events that transfer the system S from state to state are the simplest, then the process taking place in the system will be Markov (the simplest character of the flows is a sufficient but not a necessary condition for a Markov process, since a simplest flow has no aftereffect: the "future" does not depend on the "past").

If the system S is in some state S i from which there is a direct transition to another state S j, then this can be pictured as follows: while the system is in state S i, it is acted upon by a simplest flow of events that transfers it along the arrow. As soon as the first event of this flow occurs, the system passes from S i to S j. For clarity, on the state graph we mark each arrow with the intensity of the flow of events that moves the system along this arrow. Let λ ij denote the intensity of the flow of events that transfers the system from state S i to S j. Such a graph will be called marked (Fig. 4.8). (Let us return to the example of a technical device consisting of two nodes.)

Let us recall the states of the system:

S 0 - both nodes are in good order;

S 1 - the first unit is under repair, the second is operational;

S 2 - the second unit is under repair, the first is serviceable;

S 3 - both units are under repair.

We will calculate the intensities of the streams of events that transfer the system from state to state, assuming that the average time for repairing a node does not depend on whether one node is being repaired or both at once. This will be the case if a separate specialist is engaged in the repair of each unit.

Let us find all the intensities of the streams of events that transfer the system from state to state. Let the system be in state S 0. What flow of events brings it to state S 1? Obviously, the flow of failures of the first node. Its intensity λ 1 is equal to one divided by the mean time of failure-free operation of the first node. The flow of events transferring the system back from S 1 to S 0 is the flow of "completions of repairs" of the first node. Its intensity μ 1 is equal to one divided by the average repair time of the first node. The intensities of the flows of events along all the other arrows are found similarly and are shown at the arrows of the graph in Fig. 4.9.

Having a labeled graph of system states, one can build a mathematical model of this process.

Consider a system S having n possible states S 1, S 2, …, S n. Let us call the probability of the i-th state the probability P i (t) that at the moment t the system will be in state S i. Obviously, for any moment the sum of the probabilities of all states is equal to one:

P 1 (t) + P 2 (t) + … + P n (t) = 1. (4.5)

Having at our disposal a marked state graph, we can find all the state probabilities P i (t) as functions of time. For this, the Kolmogorov equations are composed and solved: differential equations of a special kind in which the unknown functions are the probabilities of the states.

Let us show by an example how these equations are composed. Let the system S have 4 states: S 1, S 2, S 3, S 4, the marked graph of which is shown in Fig. 4.10. Consider one of the state probabilities, for example P 1 (t). This is the probability that at time t the system will be in state S 1. Give t a small increment Δt and find P 1 (t + Δt), the probability that at time t + Δt the system will be in state S 1. How can this happen? Obviously, in two ways:

    at time t the system was already in state S 1, and during the time Δt it did not leave this state; or

    at time t the system was in state S 2, and during the time Δt it passed from it to S 1.

Let us find the probability of the first option. The probability that at time t the system was in state S 1 is P 1 (t). This probability must be multiplied by the probability that, being in state S 1 at the moment t, the system during the time Δt will pass neither to S 2 nor to S 3. The total flow of events that takes the system out of state S 1 is also the simplest, with intensity λ 12 + λ 13 (when two simplest flows are superimposed, i.e. superposed, a simplest flow is again obtained, since the properties of stationarity, ordinariness and absence of aftereffect are preserved). Hence the probability that in time Δt the system will leave state S 1 is (λ 12 + λ 13) Δt, and the probability that it will not is 1 − (λ 12 + λ 13) Δt. Thus the probability of the first option is equal to P 1 (t) [1 − (λ 12 + λ 13) Δt].
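The first-order approximation used here, namely that the probability of leaving S 1 within Δt is (λ 12 + λ 13) Δt, can be compared with the exact no-event probability exp(−(λ 12 + λ 13) Δt) of a simplest (Poisson) flow. The intensities in the sketch below are hypothetical, since none are given numerically in the text.

```python
import math

# Hypothetical intensities for illustration (lam12, lam13 are not
# given numerically in the text).
lam12, lam13 = 0.8, 0.5
total = lam12 + lam13   # superposition of two simplest flows

for dt in (0.1, 0.01, 0.001):
    exact = math.exp(-total * dt)   # exact probability of no event in dt
    approx = 1 - total * dt         # first-order approximation
    # the discrepancy shrinks like dt**2, so the approximation
    # becomes exact in the limit dt -> 0
    print(dt, exact, approx, exact - approx)
```

The printed discrepancy falls roughly a hundredfold each time Δt is reduced tenfold, which is exactly why the approximate equalities become exact in the passage to the limit.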

Let us find the probability of the second option. It is equal to the probability that at time t the system was in state S 2 and during the time Δt passed from it to state S 1, i.e., it is equal to P 2 (t) λ 21 Δt.

Adding the probabilities of both options (by the rule of addition of probabilities), we get: P 1 (t + Δt) = P 1 (t) [1 − (λ 12 + λ 13) Δt] + P 2 (t) λ 21 Δt.

Expand the square brackets, move P 1 (t) to the left-hand side, and divide both sides by Δt:

[P 1 (t + Δt) − P 1 (t)] / Δt = λ 21 P 2 (t) − (λ 12 + λ 13) P 1 (t).

Let Δt tend to zero; on the left we obtain in the limit the derivative of the function P 1 (t). Thus, we write down the differential equation for P 1 (t):

dP 1 (t)/dt = λ 21 P 2 (t) − (λ 12 + λ 13) P 1 (t),

or, discarding the argument t from the functions P 1, P 2:

dP 1 /dt = λ 21 P 2 − (λ 12 + λ 13) P 1. (4.6)

Arguing similarly for all other states, we write three more differential equations. As a result, we obtain a system of differential equations for the probabilities of states:

(4.7)

This is a system of 4 linear differential equations with four unknown functions P 1, P 2, P 3, P 4. Any one of them can be discarded by using the fact that P 1 + P 2 + P 3 + P 4 = 1: express one of the probabilities P i in terms of the others, substitute this expression into (4.7), and discard the corresponding equation with the derivative.

Let us now formulate a general rule for composing the Kolmogorov equations. On the left-hand side of each of them stands the derivative of the probability of some (i-th) state. On the right-hand side stands the sum of the products of the probabilities of all states from which arrows go to the given state by the intensities of the corresponding streams of events, minus the total intensity of all streams leading the system out of the given state, multiplied by the probability of the given (i-th) state.

Using this rule, we write the Kolmogorov equations for the system S (Fig. 4.9). Denoting by λ 1, λ 2 the intensities of the failure flows of the two nodes and by μ 1, μ 2 the intensities of their repair-completion flows, we obtain:

dP 0 /dt = μ 1 P 1 + μ 2 P 2 − (λ 1 + λ 2) P 0,
dP 1 /dt = λ 1 P 0 + μ 2 P 3 − (μ 1 + λ 2) P 1,
dP 2 /dt = λ 2 P 0 + μ 1 P 3 − (μ 2 + λ 1) P 2,
dP 3 /dt = λ 2 P 1 + λ 1 P 2 − (μ 1 + μ 2) P 3. (4.8)

To solve the Kolmogorov equations and find the state probabilities, the initial conditions must be specified. If the initial state of the system is known exactly, say S i, then at t = 0 we have P i (0) = 1 and all the other initial probabilities are equal to 0. Thus, it is natural to solve equations (4.8) with the initial conditions P 0 (0) = 1, P 1 (0) = P 2 (0) = P 3 (0) = 0 (at the initial moment both nodes are in good order). When the number of equations is more than two or three, they are usually solved numerically on a computer.
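As a sketch of such a numerical solution, the following applies Euler's method to the Kolmogorov equations of the two-node system, with failure intensities lam1, lam2 and repair intensities mu1, mu2. The numerical values are hypothetical illustration values, not taken from the text.

```python
# Minimal Euler integration of the Kolmogorov equations for the
# two-node system. The intensities are hypothetical illustration
# values (none are given numerically in the text).
lam1, lam2 = 1.0, 2.0   # failure flows of nodes 1 and 2
mu1, mu2 = 2.0, 3.0     # repair-completion flows of nodes 1 and 2

def derivatives(P):
    P0, P1, P2, P3 = P
    return (mu1 * P1 + mu2 * P2 - (lam1 + lam2) * P0,
            lam1 * P0 + mu2 * P3 - (mu1 + lam2) * P1,
            lam2 * P0 + mu1 * P3 - (mu2 + lam1) * P2,
            lam2 * P1 + lam1 * P2 - (mu1 + mu2) * P3)

P = (1.0, 0.0, 0.0, 0.0)          # initial conditions: both nodes in order
dt, t_end = 0.001, 20.0
for _ in range(int(t_end / dt)):
    d = derivatives(P)
    P = tuple(p + dt * dp for p, dp in zip(P, d))

# With these illustration values the nodes behave independently, so the
# limits are p = (0.4, 0.2, 4/15, 2/15); by t = 20 the transient has decayed.
print([round(p, 3) for p in P])
assert abs(sum(P) - 1) < 1e-6     # normalization is preserved
```

Note that the derivatives sum to zero term by term, so the Euler scheme preserves the normalization condition automatically.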

Thus, the Kolmogorov equations make it possible to find all state probabilities as a function of time.



Let there be a physical system S with discrete states:

S 1, S 2, ..., S n,

in which there is a Markov random process with continuous time (continuous Markov chain). The state graph is shown in Fig. 23.

Let us assume that all the intensities of the streams of events that transfer the system from state to state are constant: λ ij = const (i, j = 1, 2, …, n);

in other words, all streams of events are the simplest (stationary Poisson) streams.

Writing down the Kolmogorov system of differential equations for the state probabilities and integrating these equations for the given initial conditions, we obtain the state probabilities as a function of time, i.e., n functions:

p 1 (t), p 2 (t), ..., p n (t),

for any t giving a total of one: p 1 (t) + p 2 (t) + … + p n (t) = 1.

Let us now pose the following question: what will happen to the system S as t → ∞? Will the functions p 1 (t), p 2 (t), …, p n (t) tend to some limits? These limits, if they exist, are called the limiting (or "final") probabilities of the states.

The following general proposition can be proved. If the number of states of the system S is finite and it is possible to pass from each state (in one or another number of steps) to each other state, then the limiting probabilities of the states exist and do not depend on the initial state of the system.

Fig. 24 shows a state graph that satisfies the stated condition: from any state the system can sooner or later pass to any other. On the contrary, for the system whose state graph is shown in Fig. 25 the condition is not met. Obviously, if the initial state of such a system is S 1, then, for example, the state S 6 can be reached as t → ∞, but if the initial state is S 2, it cannot.

Suppose that the stated condition is satisfied and the limiting probabilities exist:

p i = lim (t → ∞) p i (t) (i = 1, 2, ..., n). (6.1)

We will denote the limiting probabilities by the same letters p 1, p 2, …, p n as the probabilities of the states themselves, meaning by them this time not variable quantities (functions of time) but constant numbers.

Obviously, the limiting probabilities of the states, like the pre-limiting ones, must add up to unity:

p 1 + p 2 + … + p n = 1.

Thus, as t → ∞ a certain limiting stationary regime is established in the system S: the system randomly changes its states, but the probability of each of them no longer depends on time; each state occurs with a certain constant probability. What is the meaning of this probability? It is nothing other than the average relative residence time of the system in the given state. For example, if the system S has three possible states S 1, S 2 and S 3, and their limiting probabilities are 0.2, 0.3 and 0.5, this means that after passing to the steady-state regime the system S will spend on average two tenths of the time in state S 1, three tenths in state S 2 and half the time in state S 3. The question arises: how are the limiting probabilities p 1, p 2, …, p n calculated?
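The interpretation of a limiting probability as an average relative residence time can be illustrated by simulation. The chain below is a toy three-state example constructed for this sketch, not taken from the text; its rates are chosen so that the stationary distribution is exactly 0.2, 0.3, 0.5, and a long simulated run spends roughly those fractions of time in each state.

```python
import random

# A toy continuous-time Markov chain: S1 <-> S2 <-> S3. The rates are
# chosen (by detailed balance) so that the limiting probabilities are
# exactly 0.2, 0.3 and 0.5, matching the example in the text.
rates = {1: {2: 3.0}, 2: {1: 2.0, 3: 5.0}, 3: {2: 3.0}}

random.seed(42)
state = 1
time_in = {1: 0.0, 2: 0.0, 3: 0.0}
for _ in range(200_000):
    out = rates[state]
    total = sum(out.values())
    time_in[state] += random.expovariate(total)  # sojourn time in the state
    # choose the next state with probability proportional to its rate
    r, acc = random.uniform(0, total), 0.0
    for nxt, rate in out.items():
        acc += rate
        if r <= acc:
            state = nxt
            break

T = sum(time_in.values())
print({s: round(t / T, 2) for s, t in time_in.items()})
```

The printed fractions of time approach 0.2, 0.3 and 0.5, exactly the "average relative residence time" reading of the limiting probabilities.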

It turns out that for this, in the system of Kolmogorov equations describing the probabilities of states, it is necessary to set all left-hand sides (derivatives) equal to zero.

Indeed, in the limiting (steady state) regime, all state probabilities are constant, which means that their derivatives are equal to zero.

If all the left-hand sides of the Kolmogorov equations for the state probabilities are set equal to zero, then the system of differential equations turns into a system of linear algebraic equations. Together with the condition

p 1 + p 2 + … + p n = 1

(the so-called "normalization condition"), these equations make it possible to calculate all the limiting probabilities p 1, p 2, …, p n.

Example 1. The physical system S has the possible states S 1, S 2, S 3, S 4, the marked graph of which is given in Fig. 26 (each arrow is labeled with the numerical value of the corresponding intensity). Calculate the limiting probabilities of the states p 1, p 2, p 3, p 4.

Solution. We write the Kolmogorov equations for the probabilities of states:

(6.3)

Assuming the left-hand sides equal to zero, we obtain a system of algebraic equations for the limiting probabilities of states:

(6.4)

Equations (6.4) are so-called homogeneous equations (without a free term). As is known from algebra, these equations determine the values p 1, p 2, p 3, p 4 only up to a constant factor. Fortunately, we have the normalization condition:

p 1 + p 2 + p 3 + p 4 = 1, (6.5)

which, together with equations (6.4), makes it possible to find all the unknown probabilities.

Indeed, we express from (6.4) all unknown probabilities in terms of one of them, for example, in terms of p 1. From the first equation:

p 3 = 5p 1

Substituting into the second equation, we get:

p 2 = 2 p 1 + 2p 3 = 12 p 1.

The fourth equation gives:

p 4 = (1/2) p 2 = 6 p 1.

Substituting all these expressions instead of p 2, p 3, p 4 in the normalization condition (6.5), we obtain

p 1 + 12 p 1 + 5 p 1 + 6 p 1 = 1,

whence

24 p 1 = 1, p 1 = 1/24, p 2 = 12 p 1 = 1/2, p 3 = 5 p 1 = 5/24, p 4 = 6 p 1 = 1/4.

Thus, the limiting probabilities of the states are obtained; they are equal to:

p 1 = 1/24, p 2 = 1/2, p 3 = 5/24, p 4 = 1/4 (6.6)

This means that in the limiting steady state the system S will spend in the S 1 state on average one twenty-fourth of the time, in the S 2 state - half the time, in the S 3 state - five twenty-fourths, and in the S 4 state - one quarter of the time.

Note that in solving this problem we did not use one of the equations (6.4) at all, namely the third. It is easy to see that it is a consequence of the other three: adding all four equations, we get identically zero. With equal success, in solving the system, we could have discarded any of the four equations (6.4).
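The substitution chain of Example 1 is easy to reproduce with exact arithmetic. The sketch below follows the relations p 3 = 5 p 1, p 2 = 12 p 1, p 4 = 6 p 1 derived from equations (6.4) in the text and then applies the normalization condition (6.5).

```python
from fractions import Fraction as F

# Reproduce the substitution chain of Example 1 with exact arithmetic,
# using the relations derived from equations (6.4) in the text.
p1 = F(1)                  # provisional value, fixed later by normalization
p3 = 5 * p1                # from the first equation
p2 = 2 * p1 + 2 * p3       # from the second equation: 12 p1
p4 = p2 / 2                # from the fourth equation: 6 p1

scale = p1 + p2 + p3 + p4  # = 24 p1; normalization (6.5) requires this to be 1
p1, p2, p3, p4 = (p / scale for p in (p1, p2, p3, p4))

print(p1, p2, p3, p4)      # → 1/24 1/2 5/24 1/4
```

The exact fractions agree with the answer (6.6) and sum to unity, confirming the hand computation.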

The method we used to compose the algebraic equations for the limiting probabilities of states reduced to the following: first write the differential equations, and then set their left-hand sides equal to zero. However, one can write the algebraic equations for the limiting probabilities directly, without going through the differential stage. Let us illustrate this with an example.

Example 2. The system state graph is shown in Fig. 27. Write the algebraic equations for the limiting probabilities of the states.

Solution. Without writing down the differential equations, we directly write the corresponding right-hand sides and equate them to zero; in order not to deal with negative terms, we immediately transfer those to the other side, changing the sign:

(6.7)

In order to write such equations immediately in the future, it is useful to remember the following mnemonic rule: "what flows in, flows out"; that is, for each state the sum of the terms corresponding to the incoming arrows equals the sum of the terms corresponding to the outgoing ones, and each term is equal to the intensity of the flow of events moving the system along the given arrow, multiplied by the probability of the state from which the arrow leaves.

In what follows, in all cases, we will use precisely this shortest way of writing equations for the limiting probabilities.
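The mnemonic rule lends itself to automation: given a marked graph as a dictionary of intensities, one can assemble the balance equations and solve them together with the normalization condition. The sketch below is an illustration under assumed inputs; the three-state cyclic graph at the bottom is hypothetical, not a figure from the text.

```python
from fractions import Fraction as F

def limiting_probabilities(rates, states):
    """Balance equations from the mnemonic "what flows in, flows out",
    solved together with the normalization condition.
    `rates` maps (i, j) -> intensity of the flow moving the system i -> j."""
    n = len(states)
    idx = {s: k for k, s in enumerate(states)}
    A = [[F(0)] * n for _ in range(n)]
    b = [F(0)] * n
    for (i, j), lam in rates.items():
        A[idx[i]][idx[i]] += F(lam)   # flow out of state i
        A[idx[j]][idx[i]] -= F(lam)   # the same flow enters state j
    A[-1] = [F(1)] * n                # replace one (dependent) equation
    b[-1] = F(1)                      # with the normalization condition
    # Gauss-Jordan elimination with exact rational arithmetic
    for c in range(n):
        r0 = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[r0], b[c], b[r0] = A[r0], A[c], b[r0], b[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
                b[r] -= f * b[c]
    return {s: b[idx[s]] / A[idx[s]][idx[s]] for s in states}

# A hypothetical three-state cyclic graph: S1 -> S2 -> S3 -> S1.
p = limiting_probabilities({("S1", "S2"): 2, ("S2", "S3"): 1, ("S3", "S1"): 2},
                           ["S1", "S2", "S3"])
print(p)  # {'S1': Fraction(1, 4), 'S2': Fraction(1, 2), 'S3': Fraction(1, 4)}
```

For the cyclic example the balance equations read 2 p 1 = 2 p 3 and 2 p 1 = p 2, which with normalization give 1/4, 1/2, 1/4, as the function returns.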

Example 3. Write the algebraic equations for the limiting probabilities of the states of the system S whose state graph is given in Fig. 28. Solve these equations.

Solution. We write the algebraic equations for the limiting probabilities of the states:

The normalization condition:

p 1 + p 2 + p 3 = 1. (6.9)

Let us express, using the first two equations (6.8), p 2 and p 3 in terms of p 1:

Let us substitute them into the normalization condition (6.9) and find p 1; the probabilities p 2 and p 3 are then obtained from the expressions above.