Applied Probability by Paul E. Pfeiffer


Chapter 1 Probability Systems

1.1 Likelihood*

Introduction

Probability models and techniques permeate many important areas of modern life. A variety of types of random processes, reliability models and techniques, and statistical considerations in experimental work play a significant role in engineering and the physical sciences. The solution of management decision problems draws on decision analysis, waiting line theory, inventory theory, time series, and cost analysis under uncertainty, all rooted in applied probability theory. Methods of statistical analysis employ probability analysis as an underlying discipline.

Modern probability developments are increasingly sophisticated mathematically. To utilize these, the practitioner needs a sound conceptual basis which, fortunately, can be attained at a moderate level of mathematical sophistication. There is a need to develop a feel for the structure of the underlying mathematical model, for the role of various types of assumptions, and for the principal strategies of problem formulation and solution.

Probability has roots that extend far back into antiquity. The notion of “chance” played a central role in the ubiquitous practice of gambling. But chance acts were often related to magic or religion. For example, there are numerous instances in the Hebrew Bible in which decisions were made “by lot” or some other chance mechanism, with the understanding that the outcome was determined by the will of God. In the New Testament, the book of Acts describes the selection of a successor to Judas Iscariot as one of “the Twelve.” Two names, Joseph Barsabbas and Matthias, were put forward. The group prayed, then drew lots, which fell on Matthias.

Early developments of probability as a mathematical discipline, freeing it from its religious and magical overtones, came as a response to questions about games of chance played repeatedly. The mathematical formulation owes much to the work of Pierre de Fermat and Blaise Pascal in the seventeenth century. The game is described in terms of a well defined trial (a play); the result of any trial is one of a specific set of distinguishable outcomes. Although the result of any play is not predictable, certain “statistical regularities” of results are observed. The possible results are described in ways that make each result seem equally likely. If there are N such possible “equally likely” results, each is assigned a probability 1/N.

The developers of mathematical probability also took cues from early work on the analysis of statistical data. The pioneering work of John Graunt in the seventeenth century was directed to the study of “vital statistics,” such as records of births, deaths, and various diseases. Graunt determined the fractions of people in London who died from various diseases during a period in the early seventeenth century. Some thirty years later, in 1693, Edmond Halley (for whom the comet is named) published the first life insurance tables. To apply these results, one considers the selection of a member of the population on a chance basis. One then assigns the probability that such a person will have a given disease. The trial here is the selection of a person, but the interest is in certain characteristics. We may speak of the event that the person selected will die of a certain disease, say “consumption.” Although it is a person who is selected, it is death from consumption which is of interest. Out of this statistical formulation came an interest not only in probabilities as fractions or relative frequencies but also in averages or expectations. These averages play an essential role in modern probability.

We do not attempt to trace this history, which was long and halting, though marked by flashes of brilliance. Certain concepts and patterns which emerged from experience and intuition called for clarification. We move rather directly to the mathematical formulation (the “mathematical model”) which has most successfully captured these essential ideas. This model, rooted in the mathematical system known as measure theory, is called the Kolmogorov model, after the brilliant Russian mathematician A.N. Kolmogorov (1903-1987). Kolmogorov succeeded in bringing together various developments begun at the turn of the century, principally in the work of E. Borel and H. Lebesgue on measure theory. Kolmogorov published his epochal work in German in 1933. It was translated into English and published in 1956 by Chelsea Publishing Company.

Outcomes and events

Probability applies to situations in which there is a well defined trial whose possible outcomes are found among those in a given basic set. The following are typical.

  • A pair of dice is rolled; the outcome is viewed in terms of the numbers of spots appearing on the top faces of the two dice. If the outcome is viewed as an ordered pair, there are thirty six equally likely outcomes. If the outcome is characterized by the total number of spots on the two dice, then there are eleven possible outcomes (not equally likely). A short sketch following this list enumerates both descriptions.

  • A poll of a voting population is taken. Outcomes are characterized by responses to a question. For example, the responses may be categorized as positive (or favorable), negative (or unfavorable), or uncertain (or no opinion).

  • A measurement is made. The outcome is described by a number representing the magnitude of the quantity in appropriate units. In some cases, the possible values fall among a finite set of integers. In other cases, the possible values may be any real number (usually in some specified interval).

  • Much more sophisticated notions of outcomes are encountered in modern theory. For example, in communication or control theory, a communication system experiences only one signal stream in its life. But a communication system is not designed for a single signal stream. It is designed for one of an infinite set of possible signals. The likelihood of encountering a certain kind of signal is important in the design. Such signals constitute a subset of the larger set of all possible signals.
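
The first item above can be made concrete with a few lines of code. The following Python sketch is our own illustration, not part of the text; the names omega and totals are our own. It enumerates the thirty six ordered pairs for a pair of dice and tallies the eleven possible totals.

```python
from collections import Counter

# Basic space for a roll of two distinguishable dice: 36 equally likely
# ordered pairs (first die, second die).
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
assert len(omega) == 36

# Characterizing each outcome by the total number of spots gives eleven
# possible values (2 through 12), which are not equally likely.
totals = Counter(i + j for i, j in omega)
assert len(totals) == 11
print(totals[7])   # 6 of the 36 ordered pairs give a total of seven
```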

These considerations show that our probability model must deal with

  • A trial which results in (selects) an outcome from a set of conceptually possible outcomes. The trial is not successfully completed until one of the outcomes is realized.

  • Associated with each outcome is a certain characteristic (or combination of characteristics) pertinent to the problem at hand. In polling for political opinions, it is a person who is selected. That person has many features and characteristics (race, age, gender, occupation, religious preference, preferences for food, etc.). But the primary feature, which characterizes the outcome, is the political opinion on the question asked. Of course, some of the other features may be of interest for analysis of the poll.

Inherent in informal thought, as well as in precise analysis, is the notion of an event to which a probability may be assigned as a measure of the likelihood the event will occur on any trial. A successful mathematical model must formulate these notions with precision. An event is identified in terms of the characteristic of the outcome observed. The event “a favorable response” to a polling question occurs if the outcome observed has that characteristic; i.e., iff (if and only if) the respondent replies in the affirmative. A hand of five cards is drawn. The event “one or more aces” occurs iff the hand actually drawn has at least one ace. If that same hand has two cards of the suit of clubs, then the event “two clubs” has occurred. These considerations lead to the following definition.

Definition. The event determined by some characteristic of the possible outcomes is the set of those outcomes having this characteristic. The event occurs iff the outcome of the trial is a member of that set (i.e., has the characteristic determining the event).

  • The event of throwing a “seven” with a pair of dice (which we call the event SEVEN) consists of the set of those possible outcomes with a total of seven spots turned up. The event SEVEN occurs iff the outcome is one of those combinations with a total of seven spots (i.e., belongs to the event SEVEN). This could be represented as follows. Suppose the two dice are distinguished (say by color) and a picture is taken of each of the thirty six possible combinations. On the back of each picture, write the number of spots. Now the event SEVEN consists of the set of all those pictures with seven on the back. Throwing the dice is equivalent to selecting randomly one of the thirty six pictures. The event SEVEN occurs iff the picture selected is one of the set of those pictures with seven on the back. (A set-based sketch of this representation follows the list.)

  • Observing for a very long (theoretically infinite) time the signal passing through a communication channel is equivalent to selecting one of the conceptually possible signals. Now such signals have many characteristics: the maximum peak value, the frequency spectrum, the degree of differentiability, the average value over a given time period, etc. If the signal has a peak absolute value less than ten volts, a frequency spectrum essentially limited from 60 hertz to 10,000 hertz, with peak rate of change 10,000 volts per second, then it is one of the set of signals with those characteristics. The event "the signal has these characteristics" has occurred. This set (event) consists of an uncountable infinity of such signals.
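
The picture-selection analogy in the first item can be mirrored directly with sets. The short Python sketch below is our own illustration of the definition: the event SEVEN is represented as a subset of the thirty six outcomes, and occurrence is simply set membership.

```python
# The basic space for two distinguishable dice, and the event SEVEN as the
# subset of outcomes whose spots total seven.
omega = {(i, j) for i in range(1, 7) for j in range(1, 7)}
SEVEN = {outcome for outcome in omega if sum(outcome) == 7}

# The event occurs iff the outcome of the trial is a member of the set.
outcome = (3, 4)            # one particular play of the dice
print(outcome in SEVEN)     # True: the event SEVEN has occurred
print(len(SEVEN))           # 6 outcomes belong to the event SEVEN
```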

One of the advantages of this formulation of an event as a subset of the basic set of possible outcomes is that we can use elementary set theory as an aid to formulation. And tools, such as Venn diagrams and indicator functions for studying event combinations, provide powerful aids to establishing and visualizing relationships between events. We formalize these ideas as follows:

  • Let Ω be the set of all possible outcomes of the basic trial or experiment. We call this the basic space or the sure event, since if the trial is carried out successfully the outcome will be in Ω ; hence, the event Ω is sure to occur on any trial. We must specify unambiguously what outcomes are “possible.” In flipping a coin, the only accepted outcomes are “heads” and “tails.” Should the coin stand on its edge, say by leaning against a wall, we would ordinarily consider that to be the result of an improper trial.

  • As we note above, each outcome may have several characteristics which are the basis for describing events. Suppose we are drawing a single card from an ordinary deck of playing cards. Each card is characterized by a “face value” (two through ten, jack, queen, king, ace) and a “suit” (clubs, hearts, diamonds, spades). An ace is drawn (the event ACE occurs) iff the outcome (card) belongs to the set (event) of four cards with ace as face value. A heart is drawn iff the card belongs to the set of thirteen cards with heart as suit. Now it may be desirable to specify events which involve various logical combinations of the characteristics. Thus, we may be interested in the event that the face value is jack or king and the suit is heart or spade. The set for jack or king is represented by the union J ∪ K and the set for heart or spade is the union H ∪ S. The occurrence of both conditions means the outcome is in the intersection (common part) designated by ∩. Thus the event referred to is

    (1.1) E = (J ∪ K) ∩ (H ∪ S)

    The notation of set theory thus makes possible a precise formulation of the event E .

  • Sometimes we are interested in the situation in which the outcome does not have one of the characteristics. Thus the set of cards which does not have suit heart is the set of all those outcomes not in event H . In set theory, this is the complementary set (event) Hc.

  • Events are mutually exclusive iff not more than one can occur on any trial. This is the condition that the sets representing the events are disjoint (i.e., have no members in common).

  • The notion of the impossible event is useful. The impossible event is, in set terminology, the empty set ∅. Event ∅ cannot occur, since it has no members (contains no outcomes). One use of ∅ is to provide a simple way of indicating that two sets are mutually exclusive. To say AB = ∅ (here we use the alternate AB for A ∩ B) is to assert that events A and B have no outcome in common, hence cannot both occur on any given trial.

  • Set inclusion provides a convenient way to designate the fact that event A implies event B, in the sense that the occurrence of A requires the occurrence of B. The set relation A ⊂ B signifies that every element (outcome) in A is also in B. If a trial results in an outcome in A (event A occurs), then that outcome is also in B (so that event B has occurred). The sketch following this list illustrates these set operations for the single-card example.
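
The set operations listed above (union, intersection, complement, disjointness, inclusion) can all be exercised on the single-card example. The Python sketch below is our own illustration; the event names J, K, H, S follow the discussion above, while the representation of a card as a (face, suit) pair is an assumption made for the sketch.

```python
# The basic space: an ordinary deck of 52 playing cards, each a (face, suit) pair.
faces = ['2', '3', '4', '5', '6', '7', '8', '9', '10',
         'jack', 'queen', 'king', 'ace']
suits = ['clubs', 'hearts', 'diamonds', 'spades']
omega = {(f, s) for f in faces for s in suits}

J = {c for c in omega if c[0] == 'jack'}     # face value jack
K = {c for c in omega if c[0] == 'king'}     # face value king
H = {c for c in omega if c[1] == 'hearts'}   # suit heart
S = {c for c in omega if c[1] == 'spades'}   # suit spade

E = (J | K) & (H | S)        # E = (J ∪ K) ∩ (H ∪ S)
Hc = omega - H               # the complementary event Hc
assert len(E) == 4           # jack/king of hearts/spades
assert len(Hc) == 39         # cards whose suit is not heart
assert J & K == set()        # J and K are mutually exclusive (disjoint)
assert E <= (J | K)          # E ⊂ J ∪ K: occurrence of E implies J ∪ K
print(sorted(E))
```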

The language and notation of sets provide a precise language and notation for events and their combinations. We collect below some useful facts about logical (often called Boolean) combinations of events (as sets). The notion of Boolean combinations may be applied to arbitrary classes of sets. For this reason, it is sometimes useful to use an index set J to designate membership. We say the index J is countable if it is finite or countably infinite; otherwise it is uncountable. In the following it may be arbitrary.

(1.2)
⋃_{i∈J} Ai = {ω : ω ∈ Ai for at least one i ∈ J}    and    ⋂_{i∈J} Ai = {ω : ω ∈ Ai for all i ∈ J}

For example, if J = {1, 2, ⋯, n} then {Ai : i ∈ J} is the class {A1, A2, ⋯, An}, and

(1.3)
⋃_{i∈J} Ai = A1 ∪ A2 ∪ ⋯ ∪ An    and    ⋂_{i∈J} Ai = A1 ∩ A2 ∩ ⋯ ∩ An

If J = {1, 2, ⋯} (the natural numbers), then {Ai : i ∈ J} is the sequence A1, A2, ⋯, and

(1.4)
⋃_{i∈J} Ai = A1 ∪ A2 ∪ ⋯    and    ⋂_{i∈J} Ai = A1 ∩ A2 ∩ ⋯

If event E is the union of a class of events, then event E occurs iff at least one event in the class occurs. If F is the intersection of a class of events, then event F occurs iff all events in the class occur on the trial.
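
As a small illustration of unions and intersections over an index set, the following Python sketch (ours, with an arbitrarily chosen basic space and class of events) forms E and F for a class of three events and checks the occurrence statements just made.

```python
# An indexed class of events {Ai : i ∈ J} with J = {1, 2, 3}, as subsets of a
# small illustrative basic space (the outcomes 0 through 9 are our own choice).
omega = set(range(10))
A = {1: {0, 1, 2, 3}, 2: {2, 3, 4}, 3: {3, 4, 5, 6}}

E = set.union(*A.values())           # E occurs iff at least one Ai occurs
F = set.intersection(*A.values())    # F occurs iff every Ai occurs

outcome = 3
print(outcome in E, outcome in F)    # True True: 3 belongs to every Ai
print(sorted(E), sorted(F))          # [0, 1, 2, 3, 4, 5, 6] and [3]
```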

The role of disjoint unions is so important in probability that it is useful to have a symbol indicating the union of a disjoint class. We use the big V to indicate that the sets combined in the union are disjoint. Thus, for example, we write

(1.5)
A = A1 ⋁ A2 ⋁ ⋯ ⋁ An  to signify the union A = A1 ∪ A2 ∪ ⋯ ∪ An of a class {A1, A2, ⋯, An} known to be disjoint
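
The big-V convention can be mimicked computationally. The helper below is our own sketch, not notation from the text: it forms the union of a class only after verifying that the class is pairwise disjoint.

```python
from itertools import combinations

def disjoint_union(*events):
    """Union of events, asserting first that they are pairwise disjoint,
    in the spirit of the big-V notation above."""
    for a, b in combinations(events, 2):
        assert a.isdisjoint(b), "events are not mutually exclusive"
    return set().union(*events)

# Usage: the faces 'one through three' and 'four through six' of a single die.
low, high = {1, 2, 3}, {4, 5, 6}
print(disjoint_union(low, high))    # {1, 2, 3, 4, 5, 6}
```
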
Example 1.1 Events derived from a class

Consider the class {E1, E2, E3} of events. Let Ak be the event that exactly k occur on a trial and Bk be the event that k or more occur on a trial. Then

(1.6)
A0 = E1cE2cE3c,  A1 = E1E2cE3c ⋁ E1cE2E3c ⋁ E1cE2cE3,
A2 = E1E2E3c ⋁ E1E2cE3 ⋁ E1cE2E3,  A3 = E1E2E3

The unions are disjoint since each pair of terms has Ei in one and Eic in the other, for at least one i. Now the Bk can be expressed in terms of the Ak. For example

(1.7) B2 = A2 ⋁ A3

The union in this expression for B2 is disjoint since we cannot have exactly two of the Ei occur and exactly three of them occur on the same trial. We may express B2 directly in terms of the Ei as follows:

(1.8) B2 = E1E2 ∪ E1E3 ∪ E2E3

Here the union is not disjoint, in general. However, if one pair, say {E1, E3}, is disjoint, then E1E3 = ∅ and the pair {E1E2, E2E3} is disjoint (draw a Venn diagram). Suppose C is the event that the first two occur, or the last two occur, but no other combination. Then

(1.9) C = E1E2E3c ⋁ E1cE2E3

Let D be the event that one or three of the events occur.

(1.10) D = A1 ⋁ A3 = E1E2cE3c ⋁ E1cE2E3c ⋁ E1cE2cE3 ⋁ E1E2E3
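
The relations asserted in Example 1.1 can be verified by brute force. In the Python sketch below (our own check, not part of the text), an outcome is coded as a 0/1 triple recording which of E1, E2, E3 occur; the dictionaries E, A, and B are our own names for the events defined in the example.

```python
from itertools import product

# The eight 0/1 triples (occurrence patterns of E1, E2, E3) serve as outcomes.
omega = set(product([0, 1], repeat=3))

E = {i: {w for w in omega if w[i - 1] == 1} for i in (1, 2, 3)}
A = {k: {w for w in omega if sum(w) == k} for k in range(4)}   # exactly k occur
B = {k: {w for w in omega if sum(w) >= k} for k in range(4)}   # k or more occur

assert B[2] == A[2] | A[3]                                     # equation (1.7)
assert B[2] == (E[1] & E[2]) | (E[1] & E[3]) | (E[2] & E[3])   # equation (1.8)
C = ((E[1] & E[2]) - E[3]) | ((E[2] & E[3]) - E[1])            # equation (1.9)
D = A[1] | A[3]                                                # equation (1.10)
print(len(C), len(D))                                          # 2 4
```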

Two important patterns in set theory known as DeMorgan's rules are useful in the handling of events. For an arbitrary class {Ai : i ∈ J} of events,

(1.11)
[⋃_{i∈J} Ai]c = ⋂_{i∈J} Aic    and    [⋂_{i∈J} Ai]c = ⋃_{i∈J} Aic

An outcome is not in the union (i.e., not in at least one) of the Ai iff it fails to be in all Ai, and it is not in the intersection (i.e. not in all) iff it fails to be in at least one of the Ai.
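
These rules are easy to verify computationally. The sketch below is our own check of (1.11) on a small, arbitrarily chosen class of three sets.

```python
# DeMorgan's rules (1.11) checked on a small class of three sets; the basic
# space and the particular sets are our own illustrative choices.
omega = set(range(8))
A = [{0, 1, 2}, {2, 3, 4}, {4, 5}]

union = set().union(*A)
intersection = set.intersection(*A)

# Complement of the union = intersection of the complements, and dually.
assert omega - union == set.intersection(*[omega - a for a in A])
assert omega - intersection == set().union(*[omega - a for a in A])
print("DeMorgan's rules hold for this class")
```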

Example 1.2 Continuation of Example 1.1

Express the event of no more than one occurrence of the events in {E1, E2, E3} as B2c.

(1.12)
B2c = [E1E2 ∪ E1E3 ∪ E2E3]c = (E1c ∪ E2c)(E1c ∪ E3c)(E2c ∪ E3c) = E1cE2c ∪ E1cE3c ∪ E2cE3c

The last expression shows that not more than one of the Ei occurs iff at least two of them fail to occur.
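
A brute-force check of this last statement, in the same 0/1-triple coding used for Example 1.1 above (again our own sketch), confirms that B2c coincides with the event that at least two of the Ei fail to occur.

```python
from itertools import product

# Outcomes coded as 0/1 triples for (E1, E2, E3): "no more than one occurs"
# is the same event as "at least two fail to occur".
omega = set(product([0, 1], repeat=3))
B2 = {w for w in omega if sum(w) >= 2}           # two or more of the Ei occur

no_more_than_one = omega - B2                     # the event B2c
at_least_two_fail = {w for w in omega if w.count(0) >= 2}
assert no_more_than_one == at_least_two_fail
```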

1.2 Probability Systems*

Probability measures

In the module "Likelih