
Long-Run Behavior of Markov Chains

As the time index $m$ approaches infinity, a Markov chain may settle down and exhibit steady-state behavior. If the following limit exists:

\begin{displaymath}
\lim_{m \rightarrow \infty} p^{(m)}_{ij} = \pi_{j}
\end{displaymath}

for all values of $i$, then the $\{\pi_{j}\}$ are the limiting or steady-state probabilities.

Looking at $\pi_{j}(m)$, the probability of being in state $j$ at time $m$, as $m$ approaches infinity, we see that:

\begin{eqnarray*}
\lim_{m \rightarrow \infty} \pi_{j}(m) & = & \lim_{m \rightarrow \infty} \sum_{i} \pi_{i}(0)\, p^{(m)}_{ij} \\
 & = & \sum_{i} \pi_{i}(0) \lim_{m \rightarrow \infty} p^{(m)}_{ij} \\
 & = & \sum_{i} \pi_{i}(0)\, \pi_{j} \\
 & = & \pi_{j} \sum_{i} \pi_{i}(0) \\
 & = & \pi_{j}
\end{eqnarray*}

since the initial state probabilities $\pi_{i}(0)$ sum to one. The limit is therefore the same for every initial distribution.
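
As a quick numerical check, the following sketch (using NumPy, with a hypothetical two-state transition matrix chosen only for illustration) raises $P$ to a large power. Every row of $P^{m}$ approaches the same vector $(\pi_{0}, \pi_{1})$, so $p^{(m)}_{ij} \rightarrow \pi_{j}$ regardless of the starting state $i$.

\begin{verbatim}
import numpy as np

# Hypothetical two-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# P^m for a large m; rows converge to the steady-state vector.
Pm = np.linalg.matrix_power(P, 50)
print(Pm)
# Both rows are approximately [0.8333, 0.1667], i.e. (pi_0, pi_1),
# independent of the starting state i.
\end{verbatim}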

When the limiting probabilities exist, they can be found using the following equations:

\begin{displaymath}
\Pi = \Pi P
\end{displaymath}

and

\begin{displaymath}
\sum_{i} \pi_{i} = 1
\end{displaymath}

where

\begin{displaymath}
\Pi =
\left[
\begin{array}{cccc} \pi_{0} & \pi_{1} & \pi_{2} & \cdots \end{array} \right]
\end{displaymath}
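
In practice these balance equations can be solved numerically. The sketch below (again NumPy, with the same hypothetical two-state matrix) rewrites $\Pi = \Pi P$ as $(P^{T} - I)\,\Pi^{T} = 0$, replaces one redundant balance equation with the normalization $\sum_{i} \pi_{i} = 1$, and solves the resulting linear system.

\begin{verbatim}
import numpy as np

# Hypothetical two-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = P.shape[0]

# Pi = Pi P  <=>  (P^T - I) Pi^T = 0.  The balance equations are
# linearly dependent, so append the normalization sum_i pi_i = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0

# Least-squares solve of the overdetermined system.
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # approximately [0.8333, 0.1667]
\end{verbatim}

The same approach extends to any finite state space: stack $P^{T} - I$ with a row of ones and solve for $\Pi$.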