Applied Finite Mathematics by Rupinder Sekhon, UniqU, LLC - HTML preview


Chapter 17 Markov Chains

Chapter Overview

In this chapter, you will learn to:

  1. Write transition matrices for Markov Chain problems.

  2. Find the long term trend for a Regular Markov Chain.

  3. Solve and interpret Absorbing Markov Chains.

Markov Chains

We will now study stochastic processes, experiments in which the outcomes of events depend on the previous outcomes. Such a process or experiment is called a Markov Chain or Markov process. The process was first studied by a Russian mathematician named Andrei A. Markov in the early 1900s.

A small town is served by two telephone companies, Mama Bell and Papa Bell. Due to their aggressive sales tactics, each month 40% of Mama Bell customers switch to Papa Bell; that is, the other 60% stay with Mama Bell. On the other hand, 30% of the Papa Bell customers switch to Mama Bell. The above information can be expressed in a matrix which lists the probabilities of moving from one state to another. This matrix is called a transition matrix.

              Mama Bell   Papa Bell
  Mama Bell  [   .60         .40   ]
  Papa Bell  [   .30         .70   ]

This matrix depicts the flow of customers from Mama Bell to Papa Bell and vice versa.
Figure 17.1

The reader should observe that a transition matrix is always a square matrix, because every possible state must appear both as a row and as a column. All entries in a transition matrix are non-negative, since they represent probabilities. Furthermore, since all possible outcomes are considered in the Markov process, the entries in each row must add up to 1.
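These three properties are easy to check mechanically. The sketch below (not part of the original text; the helper name `is_transition_matrix` is our own) validates a candidate transition matrix, using the Mama Bell / Papa Bell values from the example above:

```python
from fractions import Fraction

def is_transition_matrix(m):
    """A transition matrix must be square, have non-negative entries,
    and have each row sum to exactly 1."""
    n = len(m)
    if any(len(row) != n for row in m):
        return False  # not square
    if any(entry < 0 for row in m for entry in row):
        return False  # a negative "probability"
    return all(sum(row) == 1 for row in m)

# Rows and columns are ordered (Mama Bell, Papa Bell):
T = [[Fraction(60, 100), Fraction(40, 100)],
     [Fraction(30, 100), Fraction(70, 100)]]

print(is_transition_matrix(T))  # prints: True
```

Exact fractions are used rather than floats so that the row-sum test `sum(row) == 1` is not spoiled by rounding error.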

Example 17.1

Professor Symons either walks to school, or he rides his bicycle. If he walks to school one day, then the next day, he will walk or cycle with equal probability. But if he bicycles one day, then the probability that he will walk the next day is 1/4. Express this information in a transition matrix.

We obtain the following transition matrix by properly placing the row and column entries. Note that if, for example, Professor Symons bicycles one day, then the probability that he will walk the next day is 1/4, and therefore, the probability that he will bicycle the next day is 3/4.

             Walk   Bicycle
  Walk     [  1/2     1/2  ]
  Bicycle  [  1/4     3/4  ]

This matrix shows the probabilities that Professor Symons will walk or bicycle to school, based on his mode of travel the previous day.
Figure 17.2

Example 17.2

In Example 17.1, if it is assumed that the first day is Monday, write a matrix that gives probabilities of a transition from Monday to Wednesday.

Let W denote that Professor Symons walks and B denote that he rides his bicycle.

We use the following tree diagram to compute the probabilities.

The tree diagram depicts the probabilities that Professor Symons will walk or bicycle to school on Monday, Tuesday, and Wednesday.
Figure 17.3

The probabilities of each mode of travel on Wednesday, given the mode of travel on Monday, can be found from the tree diagram, as listed below.

P(Walked Wednesday | Walked Monday) = 1/2 · 1/2 + 1/2 · 1/4 = 3/8

P(Bicycled Wednesday | Walked Monday) = 1/2 · 1/2 + 1/2 · 3/4 = 5/8

P(Walked Wednesday | Bicycled Monday) = 1/4 · 1/2 + 3/4 · 1/4 = 5/16

P(Bicycled Wednesday | Bicycled Monday) = 1/4 · 1/2 + 3/4 · 3/4 = 11/16

We represent the results in the following matrix.

             Walk    Bicycle
  Walk     [  3/8      5/8  ]
  Bicycle  [  5/16    11/16 ]

This matrix gives the probabilities that Professor Symons will walk or bicycle to school on Wednesday, given his mode of travel on Monday.
Figure 17.4

Alternately, this result can be obtained by squaring the original transition matrix.

We list both the original transition matrix T and T² as follows:

(17.1)
T = [ 1/2  1/2 ]
    [ 1/4  3/4 ]

(17.2)
T² = [ 3/8    5/8  ]
     [ 5/16  11/16 ]

The reader should compare this result with the probabilities obtained from the tree diagram.

Consider the following case, for example,

(17.3)
P(Walked Wednesday | Bicycled Monday) = 1/4 · 1/2 + 3/4 · 1/4 = 5/16

It makes sense because to find the probability that Professor Symons will walk on Wednesday given that he bicycled on Monday, we sum the probabilities of all paths that begin with B and end in W. There are two such paths, and they are B→W→W and B→B→W.
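The squaring step can be verified with exact fraction arithmetic. The following Python sketch (not from the original text; `matmul` is our own helper name) multiplies T by itself and reproduces the entries obtained from the tree diagram:

```python
from fractions import Fraction as F

def matmul(a, b):
    """Multiply two 2 x 2 matrices exactly."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Professor Symons' transition matrix; rows and columns are (Walk, Bicycle)
T = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]

T2 = matmul(T, T)
# The entries are the tree-diagram results: 3/8, 5/8, 5/16, 11/16
print(T2[0][0], T2[0][1], T2[1][0], T2[1][1])  # prints: 3/8 5/8 5/16 11/16
```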

Example 17.3

The transition matrix for Example 17.1 is given below.

             Walk   Bicycle
  Walk     [  1/2     1/2  ]
  Bicycle  [  1/4     3/4  ]

This matrix gives the probabilities that Professor Symons will walk or bicycle to school on Tuesday, given his mode of travel on Monday.
Figure 17.5

Write the transition matrix from a) Monday to Thursday, b) Monday to Friday.

a) In writing a transition matrix from Monday to Thursday, we are moving from one state to another in three steps. That is, we need to compute T³.

(17.4)
T³ = [ 11/32  21/32 ]
     [ 21/64  43/64 ]

b) To find the transition matrix from Monday to Friday, we are moving from one state to another in 4 steps. Therefore, we compute T⁴.

(17.5)
T⁴ = [ 43/128   85/128 ]
     [ 85/256  171/256 ]

It is important that the student is able to interpret the above matrix correctly. For example, the entry 85/128 states that if Professor Symons walked to school on Monday, then there is an 85/128 probability that he will bicycle to school on Friday.
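Both powers can be checked with the same exact arithmetic. The sketch below (illustrative, not from the text; `matmul` and `matpow` are our own helper names) computes T³ and T⁴ and confirms the 85/128 entry discussed above:

```python
from fractions import Fraction as F

def matmul(a, b):
    """Multiply two 2 x 2 matrices exactly."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(m, n):
    """Raise a 2 x 2 matrix to the n-th power by repeated multiplication."""
    result = m
    for _ in range(n - 1):
        result = matmul(result, m)
    return result

T = [[F(1, 2), F(1, 2)],
     [F(1, 4), F(3, 4)]]

T3 = matpow(T, 3)  # Monday to Thursday
T4 = matpow(T, 4)  # Monday to Friday

# Walked on Monday -> bicycles on Friday:
print(T4[0][1])  # prints: 85/128
```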

There are certain Markov chains that tend to stabilize in the long run, and they are the subject of the section called “Regular Markov Chains”. It so happens that the transition matrix we have used in all the above examples is just such a Markov chain. The next example deals with the long term trend or steady-state situation for that matrix.

Example 17.4

Suppose Professor Symons continues to walk and bicycle according to the transition matrix given in Example 17.1. In the long run, how often will he walk to school, and how often will he bicycle?

As mentioned earlier, as we take higher and higher powers of our matrix T, it should stabilize.

(17.6)
T⁵ = [ .3340  .6660 ]
     [ .3330  .6670 ]

(17.7)
T¹⁰ = [ .3333  .6667 ]
      [ .3333  .6667 ]

(17.8)
T²⁰ = [ .3333  .6667 ]
      [ .3333  .6667 ]

Therefore, in the long run, Professor Symons will walk to school 1/3 of the time and bicycle 2/3 of the time.

When this happens, we say that the system is in steady-state or state of equilibrium. In this situation, all row vectors are equal. If the original matrix is an n by n matrix, we get n identical row vectors. We call this vector a fixed probability vector or the equilibrium vector E. In the above problem, the fixed probability vector E is [ 1/3  2/3 ]. Furthermore, if the equilibrium vector E is multiplied by the original matrix T, the result is the equilibrium vector E. That is,

(17.9)
[ 1/3  2/3 ] [ 1/2  1/2 ]  =  [ 1/3  2/3 ]
             [ 1/4  3/4 ]

or,

(17.10)
ET = E

Regular Markov Chains

At the end of the section called “Markov Chains”, we took the transition matrix T and started taking higher and higher powers of it. The matrix started to stabilize, and finally it reached its steady-state or state of equilibrium. When that happened, all the row vectors became the same, and we called one such row vector a fixed probability vector or an equilibrium vector E. Furthermore, we discovered that ET = E.

Section Overview

In this section, we wish to answer the following four questions.

  1. Does every Markov chain reach a state of equilibrium?

  2. Does the product of an equilibrium vector and its transition matrix always equal the equilibrium vector? That is, does ET = E?

  3. Can the equilibrium vector E be found without raising the matrix to higher powers?

  4. Does the long term market share distribution for a Markov chain depend on the initial market share?

Example 17.5

Does every Markov chain reach the state of equilibrium?

Answer: A Markov chain reaches a state of equilibrium if it is a regular Markov chain. A Markov chain is said to be regular if some power of its transition matrix has only positive entries.
Example 17.6

Determine whether the following Markov chains are regular.

  1. [matrix A — image not available]

  2. [matrix B — image not available]

  1. The matrix A is not a regular Markov chain because every power of it has an entry 0 in the first row, second column position. In fact, we will show that all 2 by 2 matrices that have a zero in the first row, second column position are not regular. Consider the following matrix M.

    (17.11)
    M = [ a  0 ]
        [ b  c ]

    (17.12)
    M² = [ a⋅a + 0⋅b   a⋅0 + 0⋅c ]
         [ b⋅a + c⋅b   b⋅0 + c⋅c ]

    Observe that the first row, second column entry, a⋅0+0⋅c, will always be zero, regardless of what power we raise the matrix to.

  2. The transition matrix B is a regular Markov chain because

    (17.13)
    [B² — image not available]

    has only positive entries.
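The regularity test itself is mechanical: raise the matrix to successive powers and look for one with only positive entries. In the sketch below (not from the text), the matrices `A` and `B` are hypothetical stand-ins, since the originals are images that did not survive; `A` is chosen to have a 0 in the first row, second column (which forces its first row to be [1, 0]), and `B` is chosen so that B² has only positive entries:

```python
def matmul(a, b):
    """Multiply two square matrices of the same size."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_regular(t, max_power=50):
    """Return True if some power of t (up to max_power) has only
    positive entries -- the test for a regular Markov chain."""
    p = t
    for _ in range(max_power):
        if all(entry > 0 for row in p for entry in row):
            return True
        p = matmul(p, t)
    return False

# Hypothetical illustrative matrices (the originals are not shown here):
A = [[1.0, 0.0],
     [0.3, 0.7]]   # the 0 in row 1, column 2 persists in every power
B = [[0.0, 1.0],
     [0.4, 0.6]]   # B squared has only positive entries

print(is_regular(A), is_regular(B))  # prints: False True
```

The `max_power` cap is a practical bound; for small chains, regularity shows up after only a few powers, as it does for `B` here at the second power.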

Example 17.7

Does the product of an equilibrium vector and its transition matrix always equal the equilibrium vector? That is, does ET = E?

At this point, the reader may have already guessed that the answer is yes if the transition matrix is a regular Markov chain. We try to illustrate with the following example from the section called “Markov Chains”.

A small town is served by two telephone companies, Mama Bell and Papa Bell. Due to their aggressive sales tactics, each month 40% of Mama Bell customers switch to Papa Bell; that is, the other 60% stay with Mama Bell. On the other hand, 30% of the Papa Bell customers switch to Mama Bell. The transition matrix is given below.

              Mama Bell   Papa Bell
  Mama Bell  [   .60         .40   ]
  Papa Bell  [   .30         .70   ]

This matrix depicts the flow of customers from Mama Bell to Papa Bell and vice versa.
Figure 17.6

If the initial market share for Mama Bell is 20% and for Papa Bell 80%, we'd like to know the long term market share for each company.

Let matrix T denote the transition matrix for this Markov chain, and M denote the matrix that represents the initial market share. Then T and M are as follows:

T = [ .60  .40 ]   and   M = [ .20  .80 ]
    [ .30  .70 ]

Since each month the townspeople switch according to the transition matrix T, after one month the distribution for each company is as follows:

(17.14)
MT = [ .20  .80 ] [ .60  .40 ]  =  [ .36  .64 ]
                  [ .30  .70 ]

After two months, the market share for each company is

(17.15)
MT² = [ .36  .64 ] [ .60  .40 ]  =  [ .408  .592 ]
                   [ .30  .70 ]

After three months the distribution is

(17.16)
MT³ = [ .408  .592 ] [ .60  .40 ]  =  [ .4224  .5776 ]
                     [ .30  .70 ]

After four months the market share is

(17.17)
MT⁴ = [ .4224  .5776 ] [ .60  .40 ]  =  [ .42672  .57328 ]
                       [ .30  .70 ]

After 30 months the market share is MT³⁰ ≈ [ .4286  .5714 ].

The market share after 30 months has stabilized to [ .4286  .5714 ].

This means that

(17.18)
[ .4286  .5714 ] [ .60  .40 ]  =  [ .4286  .5714 ]
                 [ .30  .70 ]

Once the market share reaches an equilibrium state, it stays the same, that is, ET = E.
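The month-by-month computation above is just repeated vector-matrix multiplication. A short sketch (illustrative, not from the text) iterates it 30 times and lands on the stabilized share, which is exactly 3/7 and 4/7:

```python
T = [[0.60, 0.40],
     [0.30, 0.70]]
share = [0.20, 0.80]   # initial market share: Mama Bell 20%, Papa Bell 80%

# Each month, multiply the current share (a row vector) by T
for month in range(30):
    share = [sum(share[k] * T[k][j] for k in range(2)) for j in range(2)]

print(round(share[0], 4), round(share[1], 4))  # prints: 0.4286 0.5714
```

The distance from equilibrium shrinks by a constant factor each month, so 30 iterations is far more than enough for four-decimal accuracy.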

This helps us answer the next question.

Example 17.8

Can the equilibrium vector E be found without raising the transition matrix to large powers?

The answer to the second question provides us with a way to find the equilibrium vector E.

The answer lies in the fact that