Quantitative Analysis for Management


Seventh Edition
Barry Render and Ralph M. Stair, Jr.

MODULES
M1 Statistical Quality Control
M2 Dynamic Programming
M3 Decision Theory and the Normal Distribution
M4 Material Requirements and Just-in-Time Inventory
M5 Mathematical Tools: Determinants and Matrices
M6 The Binomial Distribution

MODULE 1
Statistical Quality Control

LEARNING OBJECTIVES
After completing this module, students will be able to:
1. Define the quality of a product or service.
2. Develop four types of control charts: x̄, R, p, and c.
3. Understand the basic theoretical underpinnings of statistical quality control, including the central limit theorem.
4. Know if a process is in control or not.

MODULE OUTLINE
M1.1 Introduction
M1.2 Defining Quality and TQM
M1.3 Statistical Process Control
M1.4 Control Charts for Variables
M1.5 Control Charts for Attributes
Summary • Glossary • Key Equations • Solved Problems • Self-Test • Discussion Questions and Problems • Case Study: Bayfield Mud Company • Case Study: Morristown Daily Tribune • Bibliography
Appendix M1.1: Using QM for Windows for SPC


M1.1 INTRODUCTION ●

Statistical process control uses statistical and probability tools to help control processes and produce consistent goods and services.

The quality of a product or service is the degree to which the product or service meets specifications.

Total Quality Management encompasses the whole organization.

For almost every product or service, there is more than one organization trying to make a sale. Price may be a major issue in whether a sale is made or lost, but another factor is quality. In fact, quality is often the major issue; and poor quality can be very expensive for both the producing firm and the customer. Consequently, firms employ quality management tactics. Quality management, or as it is more commonly called, quality control (QC), is critical throughout the organization. One of the manager’s major roles is to ensure that his or her firm can deliver a quality product at the right place, at the right time, and at the right price. Quality is not just of concern for manufactured products either; it is also important in services, from banking to hospital care to education. We begin this module with an attempt to define just what quality really is. Then we deal with the most important statistical methodology for quality management: statistical process control (SPC). SPC is the application of the statistical tools we discussed in Chapter 2 to the control of processes that result in products or services.

M1.2 DEFINING QUALITY AND TQM ● To some people, a high-quality product is one that is stronger, will last longer, is built heavier, and is, in general, more durable than other products. In some cases this is a good definition of a quality product, but not always. A good circuit breaker, for example, is not one that lasts longer during periods of high current or voltage. So the quality of a product or service is the degree to which the product or service meets specifications. Increasingly, definitions of quality include an added emphasis on meeting the customer's needs. As you can see in Table M1.1, the first and second definitions are similar to ours. Total quality management (TQM) refers to a quality emphasis that encompasses the entire organization, from supplier to customer. TQM emphasizes a commitment by management

TABLE M1.1 Several Definitions of Quality
"Quality is the degree to which a specific product conforms to a design or specification." H. L. Gilmore, "Product Conformance Cost," Quality Progress (June 1974): 16.
"Quality is the totality of features and characteristics of a product or service that bears on its ability to satisfy stated or implied needs." Ross Johnson and William O. Winchell, Production and Quality, Milwaukee, WI: American Society of Quality Control, 1989, p. 2.
"Quality is fitness for use." J. M. Juran, ed., Quality Control Handbook, 3rd ed., New York: McGraw-Hill, 1974, p. 2.
"Quality is defined by the customer; customers want products and services that, throughout their lives, meet customers' needs and expectations at a cost that represents value." Ford's definition as presented in William W. Scherkenbach, Deming's Road to Continual Improvement, Knoxville, TN: SPC Press, 1991, p. 161.
"Even though quality cannot be defined, you know what it is." R. M. Pirsig, Zen and the Art of Motorcycle Maintenance, New York: Bantam Books, 1974, p. 213.

HISTORY: How Quality Control Has Evolved

In the early nineteenth century an individual skilled artisan started and finished a whole product. With the Industrial Revolution and the factory system, semiskilled workers, each making a small portion of the final product, became common. With this, responsibility for the quality of the final product tended to shift to supervisors, and pride of workmanship declined. As organizations became larger in the twentieth century, inspection became more technical and organized. Inspectors were often grouped together; their job was to make sure that bad lots were not shipped to customers. Starting in the 1920s, major statistical QC tools were developed. W. Shewhart introduced control charts in 1924, and in 1930 H. F. Dodge and H. G. Romig designed acceptance sampling tables. Also at that time the important role of quality control in all areas of the company’s performance became recognized.

During and after World War II, the importance of quality grew, often with the encouragement of the U.S. government. Companies recognized that more than just inspection was needed to make a quality product. Quality needed to be built into the production process. After World War II, an American, W. Edwards Deming, went to Japan to teach statistical quality control concepts to the devastated Japanese manufacturing sector. A second pioneer, J. M. Juran, followed Deming to Japan, stressing top management support and involvement in the quality battle. In 1961 A. V. Feigenbaum wrote his classic book Total Quality Control, which delivered a fundamental message: Make it right the first time! In 1979 Philip Crosby published Quality Is Free, stressing the need for management and employee commitment to the battle against poor quality. In 1988, the U.S. government presented its first awards for quality achievement. These are known as the Malcolm Baldrige National Quality Awards.

to have a companywide drive toward excellence in all aspects of the products and services that are important to the customer. Meeting the customer's expectations requires an emphasis on TQM if the firm is to compete as a leader in world markets.

M1.3 STATISTICAL PROCESS CONTROL ● Statistical process control (SPC) is concerned with establishing standards, monitoring standards, making measurements, and taking corrective action as a product or service is being produced. Samples of process outputs are examined; if they are within acceptable limits, the process is permitted to continue. If they fall outside certain specific ranges, the process is stopped and, typically, the assignable cause is located and removed. Control charts are graphs that show upper and lower limits for the process we want to control. A control chart is a graphic presentation of data over time. Control charts are constructed in such a way that new data can quickly be compared to past performance. Upper and lower limits in a control chart can be in units of temperature, pressure, weight, length, and so on. We take samples of the process output and plot the average of these samples on a chart that has the limits on it. Figure M1.1 graphically reveals the useful information that can be portrayed in control charts. When the average of the samples falls within the upper and lower control limits and no discernible pattern is present, the process is said to be in control; otherwise, the process is out of control or out of adjustment.

Variability in the Process All processes are subject to a certain degree of variability. Walter Shewhart of Bell Laboratories, while studying process data in the 1920s, made the distinction between the common and special causes of variation. The key is keeping variations under control. So we now look at how to build control charts that help managers and workers develop a process that is capable of producing within established limits.

SPC helps set standards. It can also monitor, measure, and correct quality problems.

A control chart is a graphic way of presenting data over time.


FIGURE M1.1 Patterns to Look for on Control Charts (Source: Bertrand L. Hansen, Quality Control: Theory and Applications, © 1963, renewed 1991, p. 65. Reprinted by permission of Prentice Hall, Upper Saddle River, NJ.)

[Figure M1.1 shows nine control chart patterns, each plotted against an upper control limit, a target line, and a lower control limit: normal behavior; one plot out above (investigate for cause); one plot out below (investigate for cause); two plots near the upper control limit (investigate for cause); two plots near the lower control limit (investigate for cause); a run of 5 above the central line (investigate for cause); a run of 5 below the central line (investigate for cause); trends in either direction of 5 plots (investigate for cause of progressive change); and erratic behavior (investigate).]

Building Control Charts When building control charts, averages of small samples (often of five items or parts) are used, as opposed to data on individual parts. Individual pieces tend to be too erratic to make trends quickly visible. The purpose of control charts is to help distinguish between natural variations and variations due to assignable causes.

IN ACTION

Statistical Process Control Helps Du Pont and the Environment

Du Pont has found that statistical process control (SPC) is an excellent approach to solving environmental problems. With a goal of slashing manufacturing waste and hazardous waste disposals by 35%, Du Pont brought together information from its quality control systems and its material management databases. Diagrams and charts examining causes and effects revealed where major problems occurred. Then the company began reducing waste materials through improved SPC standards for production. Tying together shop-floor information-based monitoring systems with air-quality standards, Du Pont identified ways to reduce emissions. Using a vendor evaluation system linked to just-in-time purchasing requirements, the company initiated controls over incoming hazardous materials.

Du Pont now saves more than 15 million pounds of plastics annually by recycling them into products rather than dumping them into landfills. Through electronic purchasing the firm has reduced wastepaper to a trickle and, by using new packaging designs, has cut in-process material wastes by nearly 40%. By integrating SPC with environmental compliance activities, Du Pont has made major quality improvements that far exceed regulatory guidelines, and at the same time the company has realized huge cost savings. Sources: Automotive Industries (June 1996): 93; and E. E. Dwinells and J. P. Sheffer, APICS—The Performance Advantage (March 1992): 30–31.


Natural Variations Natural variations affect almost every production process and are to be expected. Natural variations are the many sources of variation within a process that is in statistical control. They behave like a constant system of chance causes. Although individual measured values are all different, as a group they form a pattern that can be described as a distribution. When these distributions are normal, they are characterized by two parameters. These parameters are


Natural variations are sources of variation in a process that is statistically in control.

1. Mean, μ (the measure of central tendency, in this case, the average value)
2. Standard deviation, σ (variation, the amount by which the smaller values differ from the larger ones)

As long as the distribution (output precision) remains within specified limits, the process is said to be "in control," and the modest variations are tolerated.

Assignable Variations
When a process is not in control, we must detect and eliminate the special (assignable) causes of variation. Then its performance is predictable, and its ability to meet customer expectations can be assessed. The ability of a process to operate within statistical control is determined by the total variation that comes from natural causes—the minimum variation that can be achieved after all assignable causes have been eliminated. The objective of a process control system, then, is to provide a statistical signal when assignable causes of variation are present. Such a signal can quicken appropriate action to eliminate assignable causes. Assignable variation in a process can be traced to a specific reason. Factors such as machine wear, misadjusted equipment, fatigued or untrained workers, or new batches of raw material are all potential sources of assignable variations. Control charts such as those illustrated in Figure M1.1 help the manager pinpoint where a problem may lie.

Assignable variations in a process can be traced to a specific problem.

M1.4 CONTROL CHARTS FOR VARIABLES ● Control charts for the mean, x, and the range, R, are used to monitor processes that are measured in continuous units. The x- (x-bar) chart tells us whether changes have occurred in the central tendency of a process. This might be due to such factors as tool wear, a gradual increase in temperature, a different method used on the second shift, or new and stronger materials. The R-chart values indicate that a gain or loss in uniformity has occurred. Such a change might be due to worn bearings, a loose tool part, an erratic flow of lubricants to a machine, or to sloppiness on the part of a machine operator. The two types of charts go hand in hand when monitoring variables.

x-charts measure central tendency of a process.

R-charts measure the range between the biggest (or heaviest) and smallest (or lightest) items in a random sample.

The Central Limit Theorem
The statistical foundation for x̄-charts is the central limit theorem. In general terms, this theorem states that regardless of the distribution of the population of all parts or services, the distribution of x̄'s (each of which is a mean of a sample drawn from the population) will tend to follow a normal curve as the sample size grows large. Fortunately, even if n is fairly small (say 4 or 5), the distributions of the averages will still roughly follow a normal curve. The theorem also states that (1) the mean of the distribution of the x̄'s (called x̿) will equal the mean of the overall population (called μ); and (2) the standard deviation of the sampling distribution, σ_x̄, will be the population standard deviation, σ_x, divided by the square root of the sample size, n. In other words,

x̿ = μ   and   σ_x̄ = σ_x / √n

The central limit theorem says that the distribution of sample means will follow a normal distribution as the sample size grows large.
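The theorem is easy to check by simulation. The short Python sketch below is not from the text; the uniform population and all variable names are our own choices. It draws repeated samples of size n = 5 from a decidedly non-normal population and confirms that the mean of the sample means is close to μ and their spread is close to σ_x/√n.

```python
# Illustrative sketch: simulating the central limit theorem with a uniform population.
import random
import statistics

random.seed(42)
population = [random.uniform(10, 20) for _ in range(100_000)]  # non-normal population

n = 5  # small sample size, as in the text
sample_means = [statistics.mean(random.sample(population, n)) for _ in range(10_000)]

mu = statistics.mean(population)
sigma_x = statistics.pstdev(population)

print("population mean (mu):    ", round(mu, 3))
print("mean of sample means:    ", round(statistics.mean(sample_means), 3))
print("sigma_x / sqrt(n):       ", round(sigma_x / n ** 0.5, 3))
print("std dev of sample means: ", round(statistics.pstdev(sample_means), 3))
```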


FIGURE M1.2 Population and Sampling Distributions

[Figure M1.2 shows several possible population distributions (normal, beta, and uniform), each with its own mean μ and standard deviation σ_x, and beneath them the sampling distribution of sample means, which is always approximately normal: 95.5% of all x̄ fall within ±2σ_x̄ and 99.7% of all x̄ fall within ±3σ_x̄ of the mean; the standard error is σ_x̄ = σ_x/√n.]

Figure M1.2 shows three possible population distributions, each with its own mean, μ, and standard deviation, σ_x. If a series of random samples (x̄1, x̄2, x̄3, x̄4, and so on), each of size n, is drawn from any one of these, the resulting distribution of the x̄i's will appear as in the bottom graph of that figure. Because this is a normal distribution (as discussed in Chapter 2), we can state that
1. 99.7% of the time, the sample averages will fall within ±3σ_x̄ if the process has only random variations.
2. 95.5% of the time, the sample averages will fall within ±2σ_x̄ if the process has only random variations.
If a point on the control chart falls outside the ±3σ_x̄ control limits, we are 99.7% sure that the process has changed. This is the theory behind control charts.

Setting x̄-Chart Limits
If we know through historical data the standard deviation of the process population, σ_x, we can set upper and lower control limits with these formulas:

upper control limit (UCL) = x̿ + zσ_x̄   (M1-1)
lower control limit (LCL) = x̿ − zσ_x̄   (M1-2)

where
x̿ = mean of the sample means
z = number of normal standard deviations (2 for 95.5% confidence, 3 for 99.7%)
σ_x̄ = standard deviation of the sample means = σ_x/√n

Box-Filling Example
Let us say that a large production lot of boxes of cornflakes is sampled every hour. To set control limits that include 99.7% of the sample means, 36 boxes are randomly selected and weighed. The standard deviation of the overall population of boxes is estimated, through analysis of old records, to be 2 ounces. The average mean of all samples taken is 16 ounces. We therefore have x̿ = 16 ounces, σ_x = 2 ounces, n = 36, and z = 3. The control limits are

UCL_x̄ = x̿ + zσ_x̄ = 16 + 3(2/√36) = 16 + 1 = 17 ounces
LCL_x̄ = x̿ − zσ_x̄ = 16 − 3(2/√36) = 16 − 1 = 15 ounces
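For readers who prefer to verify the arithmetic programmatically, here is a minimal Python sketch of equations M1-1 and M1-2 applied to the box-filling numbers. The function and parameter names are our own and are not part of the text or of any software package discussed later.

```python
# Minimal sketch of x-bar chart limits when the process sigma is known (Eqs. M1-1, M1-2).
from math import sqrt

def xbar_limits_from_sigma(xbarbar, sigma_x, n, z=3):
    """Return (UCL, LCL) for an x-bar chart built from the process standard deviation."""
    sigma_xbar = sigma_x / sqrt(n)  # standard deviation of the sample means
    return xbarbar + z * sigma_xbar, xbarbar - z * sigma_xbar

ucl, lcl = xbar_limits_from_sigma(xbarbar=16, sigma_x=2, n=36, z=3)
print(ucl, lcl)  # 17.0 and 15.0 ounces, matching the box-filling example
```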

MODELING IN THE REAL WORLD: Statistical Process Control at AVX-Kyocera

Defining the Problem: AVX-Kyocera, a Japanese-owned maker of electronic chip components located in Raleigh, North Carolina, needed to improve the quality of its products and services to achieve total customer satisfaction.

Developing a Model: Statistical process control models such as x̄- and R-charts were chosen as appropriate tools.

Acquiring Input Data: Employees are empowered to collect their own data. For example, a casting machine operator measures the thickness of periodic samples that he takes from his process.

Developing a Solution: Employees plot data observations to generate SPC charts that track trends, comparing results with process limits and final customer specifications.

Testing the Solution: Samples at each machine are evaluated to ensure that the processes are indeed capable of achieving the desired results. Quality control inspectors are transferred to manufacturing duties as all plant personnel become trained in statistical methodology.

Analyzing the Results: Results of SPC are analyzed by individual operators to see if trends are present in their processes. Quality trend boards are posted in the building to display not only SPC charts, but also procedures, process document change approvals, and the names of all certified operators. Work teams are in charge of analyzing clusters of machines.

Implementing the Results: The firm has implemented a policy of zero defectives, with a very low tolerance for variable data and nearly zero defects per million for attribute data.

Source: Basile A. Denisson, "War with Defects and Peace with Quality," Quality Progress (September 1993): 97–101.


If the process standard deviation is not available or is difficult to compute, which is usually the case, these equations become impractical. In practice, the calculation of control limits is based on the average range rather than on standard deviations. We may use the equations

Control chart limits can be found using the range rather than the standard deviation.

UCL_x̄ = x̿ + A2·R̄   (M1-3)
LCL_x̄ = x̿ − A2·R̄   (M1-4)

where
R̄ = average range of the samples
A2 = value found in Table M1.2 (which assumes that z = 3)
x̿ = mean of the sample means

Here is an example. Super Cola bottles soft drinks labeled "net weight 16 ounces." An overall process average of 16.01 ounces has been found by taking several batches of samples, where each sample contained five bottles. The average range of the process is 0.25 ounce. We want to determine the upper and lower control limits for averages for this process. Looking in Table M1.2 for a sample size of 5 in the mean factor A2 column, we find the number 0.577. Thus the upper and lower control chart limits are

UCL_x̄ = x̿ + A2·R̄ = 16.01 + (0.577)(0.25) = 16.01 + 0.144 = 16.154
LCL_x̄ = x̿ − A2·R̄ = 16.01 − 0.144 = 15.866

The upper control limit is 16.154, and the lower control limit is 15.866.
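A hedged Python sketch of equations M1-3 and M1-4 follows. The helper function is our own construction, and the A2 values in the dictionary are copied from Table M1.2; the printed limits should match the Super Cola results above.

```python
# Sketch of x-bar chart limits based on the average range (Eqs. M1-3, M1-4).
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577, 6: 0.483, 8: 0.373}  # from Table M1.2

def xbar_limits_from_range(xbarbar, rbar, n):
    """Return (UCL, LCL) for an x-bar chart using the mean factor A2."""
    return xbarbar + A2[n] * rbar, xbarbar - A2[n] * rbar

ucl, lcl = xbar_limits_from_range(xbarbar=16.01, rbar=0.25, n=5)
print(round(ucl, 3), round(lcl, 3))  # about 16.154 and 15.866, as in the Super Cola example
```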

Setting Range Chart Limits
Dispersion or variability is also important. The central tendency can be under control, but ranges can be out of control.
We just determined the upper and lower control limits for the process average. In addition to being concerned with the process average, managers are interested in the dispersion or variability. Even though the process average is under control, the variability of the process may not be. For example, something may have worked itself loose in a piece of equipment. As a result, the average of the samples may remain the same, but the variation within the samples could be entirely too large. For this reason it is very common to use a control chart for ranges to monitor the process variability. The theory behind control charts for ranges is the same as for the process average. Limits are established that contain ±3 standard deviations of the distribution for the average range R̄. With a few simplifying assumptions, we can set the upper and lower control limits for ranges:

UCL_R = D4·R̄   (M1-5)
LCL_R = D3·R̄   (M1-6)

where
UCL_R = upper control chart limit for the range
LCL_R = lower control chart limit for the range
D4 and D3 = values from Table M1.2


TABLE M1.2 Factors for Computing Control Chart Limits

SAMPLE SIZE, n   MEAN FACTOR, A2   UPPER RANGE, D4   LOWER RANGE, D3
2                1.880             3.268             0
3                1.023             2.574             0
4                0.729             2.282             0
5                0.577             2.114             0
6                0.483             2.004             0
7                0.419             1.924             0.076
8                0.373             1.864             0.136
9                0.337             1.816             0.184
10               0.308             1.777             0.223
12               0.266             1.716             0.284
14               0.235             1.671             0.329
16               0.212             1.636             0.364
18               0.194             1.608             0.392
20               0.180             1.586             0.414
25               0.153             1.541             0.459

Source: Reprinted by permission of the American Society for Testing and Materials, copyright. Taken from Special Technical Publication 15-C, "Quality Control of Materials," pp. 63 and 72, 1951.

Range Example
As an example, consider a process in which the average range is 53 pounds. If the sample size is 5, we want to determine the upper and lower control chart limits. Looking in Table M1.2 for a sample size of 5, we find that D4 = 2.114 and D3 = 0. The range control chart limits are

UCL_R = D4·R̄ = (2.114)(53 pounds) = 112.042 pounds
LCL_R = D3·R̄ = (0)(53 pounds) = 0
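The same style of sketch works for the range chart. The function below is our own illustration of equations M1-5 and M1-6; the D3 and D4 values are copied from Table M1.2 for a few sample sizes.

```python
# Sketch of range chart limits (Eqs. M1-5, M1-6) using D3/D4 factors from Table M1.2.
D_FACTORS = {4: (0.0, 2.282), 5: (0.0, 2.114), 6: (0.0, 2.004), 8: (0.136, 1.864)}  # n: (D3, D4)

def range_limits(rbar, n):
    """Return (UCL_R, LCL_R) for a range chart."""
    d3, d4 = D_FACTORS[n]
    return d4 * rbar, d3 * rbar

ucl_r, lcl_r = range_limits(rbar=53, n=5)
print(ucl_r, lcl_r)  # 112.042 and 0.0 pounds, matching the range example
```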

M1.5 CONTROL CHARTS FOR ATTRIBUTES ● Control charts for x̄ and R do not apply when we are sampling attributes, which are typically classified as defective or nondefective. Measuring defectives involves counting them (for example, number of bad lightbulbs in a given lot, or number of letters or data-entry records typed with errors), whereas variables are usually measured for length or weight.

Sampling attributes differ from sampling variables.


Five Steps to Follow in Using x̄- and R-Charts
1. Collect 20 to 25 samples of n = 4 or n = 5 each from a stable process and compute the mean and range of each.
2. Compute the overall means (x̿ and R̄), set appropriate control limits, usually at the 99.7% level, and calculate the preliminary upper and lower control limits. If the process is not currently stable, use the desired mean, μ, instead of x̿ to calculate limits.
3. Graph the sample means and ranges on their respective control charts and determine whether they fall outside the acceptable limits.
4. Investigate points or patterns that indicate the process is out of control. Try to assign causes for the variation and then resume the process.
5. Collect additional samples and, if necessary, revalidate the control limits using the new data.

There are two kinds of attribute control charts: (1) those that measure the percent defective in a sample, called p-charts, and (2) those that count the number of defects, called c-charts.

p-Charts p-chart limits are based on the binomial distribution and are easy to compute.

p-charts are the principal means of controlling attributes. Although attributes that are either good or bad follow the binomial distribution, the normal distribution can be used to calculate p-chart limits when sample sizes are large. The procedure resembles the x̄-chart approach, which was also based on the central limit theorem. The formulas for p-chart upper and lower control limits follow:

UCL_p = p̄ + zσ_p   (M1-7)
LCL_p = p̄ − zσ_p   (M1-8)

where
p̄ = mean fraction defective in the sample
z = number of standard deviations (z = 2 for 95.5% limits; z = 3 for 99.7% limits)
σ_p = standard deviation of the sampling distribution

σ_p is estimated by the formula

σ_p = √( p̄(1 − p̄)/n )   (M1-9)

where n is the size of each sample.

ARCO p-Chart Example
Using a popular database software package, data-entry clerks at ARCO key in thousands of insurance records each day. Samples of the work of 20 clerks are shown in the following table. One hundred records entered by each clerk were carefully examined for errors; the fraction defective in each sample was then computed.

SAMPLE NUMBER   NUMBER OF ERRORS   FRACTION DEFECTIVE   |   SAMPLE NUMBER   NUMBER OF ERRORS   FRACTION DEFECTIVE
1               6                  0.06                 |   11              6                  0.06
2               5                  0.05                 |   12              1                  0.01
3               0                  0.00                 |   13              8                  0.08
4               1                  0.01                 |   14              7                  0.07
5               4                  0.04                 |   15              5                  0.05
6               2                  0.02                 |   16              4                  0.04
7               5                  0.05                 |   17              11                 0.11
8               3                  0.03                 |   18              3                  0.03
9               3                  0.03                 |   19              0                  0.00
10              2                  0.02                 |   20              4                  0.04
Total number of errors = 80

We want to set control limits that include 99.7% of the random variation in the entry process when it is in control. Thus, z = 3.

p̄ = total number of errors / total number of records examined = 80 / (100)(20) = 0.04

σ_p = √( (0.04)(1 − 0.04)/100 ) = 0.02

(Note: 100 is the size of each sample, n.)

UCL_p = p̄ + zσ_p = 0.04 + 3(0.02) = 0.10
LCL_p = p̄ − zσ_p = 0.04 − 3(0.02) = 0 (since we cannot have a negative percent defective)

When we plot the control limits and the sample fraction defectives, we find that only one data-entry clerk (number 17) is out of control. The firm may wish to examine that person's work a bit more closely to see whether a serious problem exists (see Figure M1.3).

Using Excel QM for SPC
Excel and other spreadsheets are extensively used in industry to maintain control charts. Excel QM's Quality Module has the ability to develop x̄-charts, p-charts, and c-charts. Programs M1.1a and M1.1b illustrate Excel QM's spreadsheet approach to computing the p-chart control limits for the ARCO example. Program M1.1a shows both the data input and formulas. Program M1.1b provides output. Excel also contains a built-in graphing ability with Chart Wizard.
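As an alternative to the spreadsheet approach, the following Python sketch (our own illustration, not Excel QM) recomputes the ARCO p-chart limits directly from the error counts in the table. Note that the text rounds σ_p to 0.02, which gives the slightly rounder UCL of 0.10.

```python
# Sketch of the ARCO p-chart limits; error counts are transcribed from the table above.
from math import sqrt

errors = [6, 5, 0, 1, 4, 2, 5, 3, 3, 2, 6, 1, 8, 7, 5, 4, 11, 3, 0, 4]
n = 100                                   # records examined per clerk
p_bar = sum(errors) / (n * len(errors))   # 80 / 2000 = 0.04
sigma_p = sqrt(p_bar * (1 - p_bar) / n)   # about 0.0196 (the text rounds to 0.02)

ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)       # a negative fraction defective is set to 0
print(round(ucl, 3), round(lcl, 3))       # about 0.099 and 0.0

out_of_control = [i + 1 for i, e in enumerate(errors) if e / n > ucl or e / n < lcl]
print(out_of_control)                     # [17] -- clerk 17 is out of control
```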

c-Charts In the ARCO example above, we counted the number of defective database records entered. A defective record was one that was not exactly correct. A bad record may contain more than one defect, however. We use c-charts to control the number of defects per unit of output (or per insurance record in the case above).

c-charts count the number of defects, whereas p-charts track the percentage defective.


FIGURE M1.3 p-Chart for Data Entry for ARCO. [The chart plots the fraction defective for samples 1 through 20 against UCL_p = 0.10, p̄ = 0.04, and LCL_p = 0.00; only sample 17 falls above the upper control limit.]

Control charts for defects are helpful for monitoring processes in which a large number of potential errors can occur but the actual number that do occur is relatively small. Defects may be mistyped words in a newspaper, blemishes on a table, or missing pickles on a fast-food hamburger. The Poisson probability distribution, which has a variance equal to its mean, is the basis for c-charts. Since c̄ is the mean number of defects per unit, the standard deviation is equal to √c̄. To compute 99.7% control limits for c̄, we use the formula

c̄ ± 3√c̄   (M1-10)

PROGRAM M1.1A Excel QM's p-Chart Program Applied to the ARCO Data, Showing Input Data and Formulas


PROGRAM M1.1B Output from Excel QM’s p-Chart Analysis of the ARCO Data

Red Top Cab Company c-Chart Example
Here is an example. Red Top Cab Company receives several complaints per day about the behavior of its drivers. Over a nine-day period (where days are the units of measure), the owner received the following numbers of calls from irate passengers: 3, 0, 8, 9, 6, 7, 4, 9, 8, for a total of 54 complaints. To compute 99.7% control limits, we take

c̄ = 54/9 = 6 complaints per day

Thus,

UCL_c = c̄ + 3√c̄ = 6 + 3√6 = 6 + 3(2.45) = 13.35
LCL_c = c̄ − 3√c̄ = 6 − 3√6 = 6 − 3(2.45) = 0 (because we cannot have a negative control limit)

After the owner plotted a control chart summarizing these data and posted it prominently in the drivers' locker room, the number of calls received dropped to an average of three per day. Can you explain why this may have occurred?
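A short Python sketch of the same calculation appears below; the list of complaints is taken from the example, and the code is our own illustration rather than part of the text.

```python
# Sketch of c-chart limits (Eq. M1-10) for the Red Top Cab complaint data.
from math import sqrt

complaints = [3, 0, 8, 9, 6, 7, 4, 9, 8]         # nine days of complaints
c_bar = sum(complaints) / len(complaints)         # 54 / 9 = 6 complaints per day
ucl = c_bar + 3 * sqrt(c_bar)                     # about 13.35
lcl = max(0.0, c_bar - 3 * sqrt(c_bar))           # a negative limit is set to 0
print(round(ucl, 2), lcl)                         # 13.35 0.0
```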

Summary To the manager of a firm producing goods or services, quality is the degree to which the product meets specifications. Quality control has become one of the most important precepts of business. The expression “quality cannot be inspected into a product” is a central theme of organizations today. More and more world-class companies are following the ideas of total quality management (TQM), which emphasizes the entire organization, from supplier to customer.


Statistical aspects of quality control date to the 1920s but are of special interest in our global marketplaces of this new century. Statistical process control (SPC) tools described in this chapter include the x- and R-charts for variable sampling and the p- and c-charts for attribute sampling.

Glossary
Quality. The degree to which a product or service meets the specifications set for it.
Total Quality Management (TQM). An emphasis on quality that encompasses the entire organization.
Control Chart. A graphic presentation of process data over time.
Natural Variations. Variabilities that affect almost every production process to some degree and are to be expected; also known as common causes.
Assignable Variation. Variation in the production process that can be traced to specific causes.
x̄-Chart. A quality control chart for variables that indicates when changes occur in the central tendency of a production process.
R-Chart. A process control chart that tracks the "range" within a sample; indicates that a gain or loss of uniformity has occurred in a production process.
Central Limit Theorem. The theoretical foundation for x̄-charts. It states that regardless of the distribution of the population of all parts or services, the distribution of x̄'s will tend to follow a normal curve as the sample size grows.
p-Chart. A quality control chart that is used to control attributes.
c-Chart. A quality control chart that is used to control the number of defects per unit of output.

Key Equations
(M1-1) UCL = x̿ + zσ_x̄
  The upper control limit for an x̄-chart using standard deviations.
(M1-2) LCL = x̿ − zσ_x̄
  The lower control limit for an x̄-chart using standard deviations.
(M1-3) UCL_x̄ = x̿ + A2·R̄
  The upper control limit for an x̄-chart using tabled values and ranges.
(M1-4) LCL_x̄ = x̿ − A2·R̄
  The lower control limit for an x̄-chart using tabled values and ranges.
(M1-5) UCL_R = D4·R̄
  Upper control limit for a range chart.
(M1-6) LCL_R = D3·R̄
  Lower control limit for a range chart.
(M1-7) UCL_p = p̄ + zσ_p
  Upper control limit for a p-chart.
(M1-8) LCL_p = p̄ − zσ_p
  Lower control limit for a p-chart.
(M1-9) σ_p = √( p̄(1 − p̄)/n )
  The standard deviation of a binomial distribution.
(M1-10) c̄ ± 3√c̄
  The upper and lower limits for a c-chart.


Solved Problems

Solved Problem M1-1
The manufacturer of precision parts for drill presses produces round shafts for use in the construction of drill presses. The average diameter of a shaft is 0.56 inch. The inspection samples contain six shafts each. The average range of these samples is 0.006 inch. Determine the upper and lower control chart limits.

Solution
The mean factor A2 from Table M1.2, where the sample size is 6, is seen to be 0.483. With this factor, you can obtain the upper and lower control limits:

UCL_x̄ = 0.56 + (0.483)(0.006) = 0.56 + 0.0029 = 0.5629
LCL_x̄ = 0.56 − 0.0029 = 0.5571

Solved Problem M1-2 Nocaf Drinks, Inc., a producer of decaffeinated coffee, bottles Nocaf. Each bottle should have a net weight of 4 ounces. The machine that fills the bottles with coffee is new, and the operations manager wants to make sure that it is properly adjusted. The operations manager takes a sample of n  8 bottles and records the average and range in ounces for each sample. The data for several samples are given in the following table. Note that every sample consists of 8 bottles.

SAMPLE   SAMPLE RANGE   SAMPLE AVERAGE   |   SAMPLE   SAMPLE RANGE   SAMPLE AVERAGE
A        0.41           4.00             |   E        0.56           4.17
B        0.55           4.16             |   F        0.62           3.93
C        0.44           3.99             |   G        0.54           3.98
D        0.48           4.00             |   H        0.44           4.01

Is the machine properly adjusted and in control?

Solution
We first find that x̿ = 4.03 and R̄ = 0.51. Then, using Table M1.2, we find

UCL_x̄ = x̿ + A2·R̄ = 4.03 + (0.373)(0.51) = 4.22
LCL_x̄ = x̿ − A2·R̄ = 4.03 − (0.373)(0.51) = 3.84
UCL_R = D4·R̄ = (1.864)(0.51) = 0.95
LCL_R = D3·R̄ = (0.136)(0.51) = 0.07

It appears that the process average and range are both in control.

Solved Problem M1-3
Crabill Electronics, Inc., makes resistors, and among the last 100 resistors inspected, the percent defective has been 0.05. Determine the upper and lower limits for this process for 99.7% confidence.


Solution

UCL_p = p̄ + 3√( p̄(1 − p̄)/n ) = 0.05 + 3√( (0.05)(1 − 0.05)/100 ) = 0.05 + 3(0.0218) = 0.1154
LCL_p = p̄ − 3√( p̄(1 − p̄)/n ) = 0.05 − 3(0.0218) = 0.05 − 0.0654 = 0 (since percent defective cannot be negative)


SELF-TEST • • •

Before taking the self-test, refer back to the learning objectives at the beginning of the module, the notes in the margins, and the glossary at the end of the module. Use the key at the back of the book to correct your answers. Restudy pages that correspond to any questions that you answered incorrectly or material you feel uncertain about.

1. Quality is defined as ________________.
2. A control chart is
   a. a means of monitoring output.
   b. a graphic presentation of data over time.
   c. a chart with upper and lower control limits.
   d. all of the above.
   e. none of the above.
3. The type of chart used to control the number of defects per unit of output is the
   a. x̄-chart.
   b. R-chart.
   c. p-chart.
   d. all of the above.
   e. none of the above.
4. Control charts for attributes are
   a. p-charts.
   b. m-charts.
   c. R-charts.
   d. x̄-charts.
   e. none of the above.
5. c-Charts are based on the
   a. Poisson distribution.
   b. normal distribution.
   c. Erlang distribution.
   d. hyper Erlang distribution.
   e. binomial distribution.
   f. none of the above.
6. If a sample of parts is measured and the mean of the sample measurement is outside the tolerance limits,
   a. the process is out of control and the cause can be established.
   b. the process is in control but is not capable of producing within the established control limits.
   c. the process is within the established control limits with only natural causes of variation.
   d. all of the above are true.
   e. none of the above are true.
7. If a sample of parts is measured and the mean of the sample measurement is in the middle of the tolerance limits but some parts measure too low and other parts measure too high,
   a. the process is out of control and the cause can be established.
   b. the process is in control but is not capable of producing within the established control limits.
   c. the process is within the established control limits with only natural causes of variation.
   d. all of the above are true.
   e. none of the above are true.
8. A goal of 6σ means expecting
   a. 2,700 errors per million parts.
   b. 95.45% accuracy.
   c. 99.73% of errors are caught.
   d. 3.4 errors per million parts.
   e. none of the above.
9. A range chart monitors ________________.
10. If a 95.5% level of confidence is desired, the x̄-chart limits will be set plus or minus ________________.
11. The two techniques discussed to find and resolve assignable variations in process control are the ______________ and the ______________.


Discussion Questions and Problems

Discussion Questions
M1-1 Why is the central limit theorem so important in statistical quality control?
M1-2 Why are x̄- and R-charts usually used hand in hand?
M1-3 Explain the differences among the four types of control charts.
M1-4 What might cause a process to be out of control?
M1-5 Explain why a process can be out of control even though all the samples fall within the upper and lower control limits.

Problems*
• M1-6 Shader Storage Technologies produces refrigeration units for food producers and retail food establishments. The overall average temperature that these units maintain is 46° Fahrenheit. The average range is 2° Fahrenheit. Samples of 6 are taken to monitor the process. Determine the upper and lower control chart limits for averages and ranges for these refrigeration units.
• M1-7 When set at the standard position, Autopitch can throw hard balls toward a batter at an average speed of 60 mph. Autopitch devices are made for both major- and minor-league teams to help them improve their batting averages. Autopitch executives take samples of 10 Autopitch devices at a time to monitor these devices and to maintain the highest quality. The average range is 3 mph. Using control-chart techniques, determine control-chart limits for averages and ranges for Autopitch.
• M1-8 Zipper Products, Inc., produces granola cereal, granola bars, and other natural food products. Its natural granola cereal is sampled to ensure proper weight. Each sample contains eight boxes of cereal. The overall average for the samples is 17 ounces. The range is only 0.5 ounce. Determine the upper and lower control-chart limits for averages for the boxes of cereal.
•• M1-9 Small boxes of NutraFlakes cereal are labeled "net weight 10 ounces." Each hour, random samples of size n = 4 boxes are weighed to check process control. Five hours of observations yielded the following:

WEIGHT
Time      Box 1   Box 2   Box 3   Box 4
9 A.M.    9.8     10.4    9.9     10.3
10 A.M.   10.1    10.2    9.9     9.8
11 A.M.   9.9     10.5    10.3    10.1
Noon      9.7     9.8     10.3    10.2
1 P.M.    9.7     10.1    9.9     9.9

* Note: [icon] means the problem may be solved with QM for Windows; [icon] means the problem may be solved with Excel QM; and [icon] means the problem may be solved with QM for Windows and/or Excel QM.

Using these data, construct limits for x- and R-charts. Is the process in control? What other steps should the QC department follow at this point?

•• M1-10 Sampling four pieces of precision-cut wire (to be used in computer assembly) every hour for the past 24 hours has produced the following results:

HOUR   x̄      R      |   HOUR   x̄      R
1      3.25   0.71   |   13     3.11   0.85
2      3.10   1.18   |   14     2.83   1.31
3      3.22   1.43   |   15     3.12   1.06
4      3.39   1.26   |   16     2.84   0.50
5      3.07   1.17   |   17     2.86   1.43
6      2.86   0.32   |   18     2.74   1.29
7      3.05   0.53   |   19     3.41   1.61
8      2.65   1.13   |   20     2.89   1.09
9      3.02   0.71   |   21     2.65   1.08
10     2.85   1.33   |   22     3.28   0.46
11     2.83   1.17   |   23     2.94   1.58
12     2.97   0.40   |   24     2.64   0.97

Develop appropriate control limits and determine whether there is any cause for concern in the cutting process.

•• M1-11 Due to the poor quality of various semiconductor products used in their manufacturing process, Microlaboratories has decided to develop a quality control program. Because the semiconductor parts they get from suppliers are either good or defective, Milton Fisher has decided to develop control charts for attributes. The total number of semiconductors in every sample is 200. Furthermore, Milton would like to determine the upper control chart limit and the lower control chart limit for various values of the fraction defective (p) in the sample taken. To allow more flexibility, he has decided to develop a table that lists value for p, UCL, and LCL. The values for p should range from 0.01 to 0.1, incrementing by 0.01 each time. What are the UCLs and the LCLs for 99.7% confidence?

•• M1-12 For the past two months, Suzan Shader has been concerned about machine number 5 at the West Factory. To make sure that the machine is operating correctly, samples are taken, and the average and range for each sample is computed. Each sample consists of 12 items produced from the machine. Recently, 12 samples were taken, and for each, the sample range and average were computed. The sample range and sample average were 1.1 and 46 for the first sample, 1.31 and 45 for the second sample, 0.91 and 46 for the third sample, and 1.1 and 47 for the fourth sample. After the fourth sample, the sample averages increased. For the fifth sample, the range was 1.21 and the average was 48; for sample number 6 it was 0.82 and 47; for sample number 7, it was 0.86 and 50; and for the eighth sample, it was 1.11 and 49. After the eighth sample, the sample average continued to increase, never getting below 50. For sample number 9, the range and average were 1.12 and 51; for sample number 10, they were


0.99 and 52; for sample number 11, they were 0.86 and 50; and for sample number 12, they were 1.2 and 52. Although Suzan’s boss wasn’t overly concerned about the process, Suzan was. During installation, the supplier set a value of 47 for the process average with an average range of 1.0. It was Suzan’s feeling that something was definitely wrong with machine number 5. Do you agree?

•• M1-13 Kitty Products caters to the growing market for cat supplies, with a full line of products, ranging from litter to toys to flea powder. One of its newer products, a tube of fluid that prevents hair balls in long-haired cats, is produced by an automated machine that is set to fill each tube with 63.5 grams of paste. To keep this filling process under control, four tubes are pulled randomly from the assembly line every 4 hours. After several days, the data shown in the following table resulted. Set control limits for this process and graph the sample data for both the xand R-charts.

Sample No.   x̄      R     |   Sample No.   x̄      R     |   Sample No.   x̄      R
1            63.5   2.0   |   10           63.5   1.3   |   18           63.6   1.8
2            63.6   1.0   |   11           63.3   1.8   |   19           63.8   1.3
3            63.7   1.7   |   12           63.2   1.0   |   20           63.5   1.6
4            63.9   0.9   |   13           63.6   1.8   |   21           63.9   1.0
5            63.4   1.2   |   14           63.3   1.5   |   22           63.2   1.8
6            63.0   1.6   |   15           63.4   1.7   |   23           63.3   1.7
7            63.2   1.8   |   16           63.4   1.4   |   24           64.0   2.0
8            63.3   1.3   |   17           63.5   1.1   |   25           63.4   1.5
9            63.7   1.6   |

• M1-14 The smallest defect in a computer chip will render the entire chip worthless. Therefore, tight quality control measures must be established to monitor the chips. In the past, the percentage defective for these chips for a California-based company has been 1.1%. The sample size is 1,000. Determine the upper and lower control-chart limits for these computer chips. Use z  3.

•• M1-15 Barbara Schwartz’s Office Supply Company manufactures paper clips and other office products. Although inexpensive, paper clips have provided Barbara with a high margin of profitability. The percentage defective for paper clips produced by Office Supply Company has been averaging 2.5%. Samples of 200 paper clips are taken. Establish the upper and lower control-chart limits for this process at 99.7% confidence.

•• M1-16 Daily samples of 100 power drills are removed from Drill Master’s assembly line and inspected for defects. Over the past 21 days, the following information has been gathered. Develop a 3 standard deviation (99.7% confidence) p-chart and graph the samples. Is the process in control?

DAY   NUMBER OF DEFECTIVE DRILLS   |   DAY   NUMBER OF DEFECTIVE DRILLS
1     6                            |   12    5
2     5                            |   13    4
3     6                            |   14    3
4     4                            |   15    4
5     3                            |   16    5
6     4                            |   17    6
7     5                            |   18    5
8     3                            |   19    4
9     6                            |   20    3
10    3                            |   21    7
11    7                            |

•M1-17 A random sample of 100 Modern Art dining room tables that came off the firm’s assembly line is examined. Careful inspection reveals a total of 2,000 blemishes. What are the 99.7% upper and lower control limits for the number of blemishes? If one table had 42 blemishes, should any special action be taken?

Case Study Bayfield Mud Company In November 1998, John Wells, a customer service representative of Bayfield Mud Company, was summoned to the Houston, Texas, warehouse of Wet-Land Drilling, Inc., to inspect three boxcars of mud treating agents that Bayfield Mud Company had shipped to the Houston firm. (Bayfield’s corporate offices and its largest plant are located in Orange, Texas, which is just west of the Louisiana—Texas border.) Wet-Land Drilling had filed a complaint that the 50-pound bags of treating agents that it had just received from Bayfield were shortweighted by approximately 5%. The light-weight bags were initially detected by one of Wet-Land’s receiving clerks, who noticed that the railroad scale tickets indicated that the net weights were significantly less on all three of the boxcars than those of identical shipments received on October 25, 1998. Bayfield’s traffic department was called to determine whether lighter-weight dunnage or pallets were used on the shipments. (This might explain the lighter net weights.) Bayfield indicated, however, that no changes had been made in the loading or palletizing procedures. Hence, Wet-Land randomly checked 50 of the bags and discovered that the average net weight was 47.51 pounds. They noted from past shipments that the bag net weights averaged exactly 50.0 pounds, with an acceptable standard deviation of

1.2 pounds. Consequently, they concluded that the sample indicated a significant short-weight. (The reader may wish to verify this conclusion.) Bayfield was then contacted, and Wells was sent to investigate the complaint. Upon arrival, Wells verified the complaint and issued a 5% credit to Wet-Land. Wet-Land’s management, however, was not completely satisfied with only the issuance of credit for the short shipment. The charts followed by their mud engineers on the drilling platforms were based on 50-pound bags of treating agents. Lighterweight bags might result in poor chemical control during the drilling operation and might adversely affect drilling efficiency. (Mud treating agents are used to control the pH and other chemical properties of the cone during drilling operation.) This could cause severe economic consequences because of the extremely high cost of oil and natural gas well drilling operations. Consequently, special use instructions had to accompany the delivery of these shipments to the drilling platforms. Moreover, the light-weight shipments had to be isolated in Wet-Land’s warehouse, causing extra handling and poor space utilization. Hence, Wells was informed that Wet-Land Drilling might seek a new supplier of mud treating agents if, in the future, it received bags that deviated significantly from 50 pounds. The quality control department at Bayfield suspected that the light-weight bags may have resulted from “growing pains”


TIME

AVERAGE WEIGHT (POUNDS)

TIME

AVERAGE WEIGHT (POUNDS)

Smallest

Largest

Smallest

Largest

6:00 A.M.

49.6

48.7

50.7

6:00 A.M.

46.8

41.0

51.2

7:00

50.2

8:00

50.6

49.1

51.2

7:00

50.0

46.2

51.7

49.6

51.4

8:00

47.4

44.0

48.7

9:00

50.8

50.2

51.8

9:00

47.0

44.2

48.9

10:00

49.9

49.2

52.3

10:00

47.2

46.6

50.2

11:00

50.3

48.6

51.7

11:00

48.6

47.0

50.0

12 noon

48.6

46.2

50.4

12 midnight

49.8

48.2

50.4

1:00 P.M.

49.0

46.4

50.0

1:00 A.M.

49.6

48.4

51.7

2:00

49.0

46.0

50.6

2:00

50.0

49.0

52.2

3:00

49.8

48.2

50.8

3:00

50.0

49.2

50.0

4:00

50.3

49.2

52.7

4:00

47.2

46.3

50.5

5:00

51.4

50.0

55.3

5:00

47.0

44.1

49.7

6:00

51.6

49.2

54.7

6:00

48.4

45.0

49.0

7:00

51.8

50.0

55.6

7:00

48.8

44.8

49.7

8:00

51.0

48.6

53.2

8:00

49.6

48.0

51.8

9:00

50.5

49.4

52.4

9:00

50.0

48.1

52.7

10:00

49.2

46.1

50.7

10:00

51.0

48.1

55.2

11:00

49.0

46.3

50.8

11:00

50.4

49.5

54.1

12 midnight

48.4

45.4

50.2

12 noon

50.0

48.7

50.9

1:00 A.M.

47.6

44.3

49.7

1:00 P.M.

48.9

47.6

51.2

2:00

47.4

44.1

49.6

2:00

49.8

48.4

51.0

3:00

48.2

45.2

49.0

3:00

49.8

48.8

50.8

4:00

48.0

45.5

49.1

4:00

50.0

49.1

50.6

5:00

48.4

47.1

49.6

5:00

47.8

45.2

51.2

6:00

48.6

47.4

52.0

6:00

46.4

44.0

49.7

7:00

50.0

49.2

52.2

7:00

46.4

44.4

50.0

8:00

49.8

49.0

52.4

8:00

47.2

46.6

48.9

9:00

50.3

49.4

51.7

9:00

48.4

47.2

49.5

10:00

50.2

49.6

51.8

10:00

49.2

48.1

50.7

11:00

50.0

49.0

52.3

11:00

48.4

47.0

50.8

12 noon

50.0

48.8

52.4

12 midnight

47.2

46.4

49.2

1:00 A.M.

50.1

49.4

53.6

1:00 A.M.

47.4

46.8

49.0

2:00

49.7

48.6

51.0

2:00

48.8

47.2

51.4

3:00

48.4

47.2

51.7

3:00

49.6

49.0

50.6

4:00

47.2

45.3

50.9

4:00

51.0

50.5

51.5

5:00

46.8

44.1

49.0

5:00

50.5

50.0

51.9

RANGE

RANGE


at the Orange plant. Because of the earlier energy crisis, oil and natural gas exploration activity had greatly increased. This increased activity, in turn, created increased demand for products produced by related industries, including drilling muds. Consequently, Bayfield had to expand from a one-shift (6:00 A.M. to 2:00 P.M.) to a two-shift (6:00 A.M. to 10:00 P.M.) operation in mid-1996, and finally to a three-shift operation (24 hours per day) in the fall of 1998. The additional night-shift bagging crew was staffed entirely by new employees. The most experienced supervisors were temporarily assigned to supervise the night-shift employees. Most emphasis was placed on increasing the output of bags to meet the ever-increasing demand. It was suspected that only occasional reminders were made to double-check the bag weight-feeder. (A double-check is performed by systemati-

cally weighing a bag on a scale to determine whether the proper weight is being loaded by the weight-feeder. If there is significant deviation from 50 pounds, corrective adjustments are made to the weight-release mechanism.) To verify this expectation, the quality control staff randomly sampled the bag output and prepared the chart found on the previous page. Six bags were sampled and weighed each hour. Discussion Questions 1. What is your analysis of the bag weight problem? 2. What procedures would you recommend to maintain proper quality control? Source: Professor Jerry Kinard, Western Carolina University.
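For readers who want a starting point for the analysis, the hedged Python sketch below uses only the process parameters stated in the case (a 50.0-pound target, a 1.2-pound standard deviation, and six bags per hourly sample); it is one possible first step, not the case solution.

```python
# Sketch: x-bar chart limits from the process parameters stated in the Bayfield case.
from math import sqrt

mu, sigma, n, z = 50.0, 1.2, 6, 3
sigma_xbar = sigma / sqrt(n)
ucl, lcl = mu + z * sigma_xbar, mu - z * sigma_xbar
print(round(ucl, 2), round(lcl, 2))   # about 51.47 and 48.53 pounds per hourly average

# Wet-Land's 50-bag check: how many standard errors below target was 47.51 lb?
z_sample = (47.51 - mu) / (sigma / sqrt(50))
print(round(z_sample, 1))             # about -14.7, far outside any control limit
```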

Case Study Morristown Daily Tribune In July 1998, the Morristown Daily Tribune published its first newspaper in direct competition with two other newspapers, the Morristown Daily Ledger and the Clarion Herald, a weekly publication. Presently, the Ledger is the most widely read newspaper in the area, with a total circulation of 38,500. The Tribune, however, has made significant inroads into the readership market since its inception. Total circulation of the Tribune now exceeds 27,000. Rita Bornstein, editor of the Tribune, attributes the success of the newspaper to the accuracy of its contents, a strong editorial section, and the proper blending of local, regional, national, and international news items. In addition, the paper has been successful in getting the accounts of several major retailers who advertise extensively in the display section. Finally, experienced reporters, photographers, copy writers,


typesetters, editors, and other personnel have formed a team dedicated to providing the most timely and accurate reporting of news in the area. Of critical importance to good-quality newspaper printing is accurate typesetting. To assure quality in the final print, Ms. Bornstein has decided to develop a procedure for monitoring the performance of typesetters over a period of time. Such a procedure involves sampling output, establishing control limits, comparing the Tribune’s accuracy with that of the industry, and occasionally updating the information. First, Ms. Bornstein randomly selected 30 newspapers published during the preceding 12 months. From each paper, 100 paragraphs were randomly chosen and were read for accuracy. The number of paragraphs with errors in each paper was recorded, and the fraction of paragraphs with errors in each sample was determined. The following table shows the results of the sampling:

SAMPLE   PARAGRAPHS WITH ERRORS   FRACTION (PER 100)   |   SAMPLE   PARAGRAPHS WITH ERRORS   FRACTION (PER 100)
1        2                        0.02                 |   16       2                        0.02
2        4                        0.04                 |   17       3                        0.03
3        10                       0.10                 |   18       7                        0.07
4        4                        0.04                 |   19       3                        0.03
5        1                        0.01                 |   20       2                        0.02
6        1                        0.01                 |   21       3                        0.03
7        13                       0.13                 |   22       7                        0.07
8        9                        0.09                 |   23       4                        0.04
9        11                       0.11                 |   24       3                        0.03
10       0                        0.00                 |   25       2                        0.02
11       3                        0.03                 |   26       2                        0.02
12       4                        0.04                 |   27       0                        0.00
13       2                        0.02                 |   28       1                        0.01
14       2                        0.02                 |   29       3                        0.03
15       8                        0.08                 |   30       4                        0.04

Discussion Questions 1. Plot the overall fraction of errors ( p) and the upper and lower control limits on a control chart using a 95% confidence level. 2. Assume that the industry upper and lower control limits are 0.1000 and 0.0400, respectively. Plot them on the control chart.

3. Plot the fraction of errors in each sample. Do all fall within the firm’s control limits? When one falls outside the control limits, what should be done?

Source: Professor Jerry Kinard, Western Carolina University.
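As a starting point for the discussion questions, here is a hedged Python sketch (our own, not the case solution) that computes p̄ and approximate 95% control limits from the error counts in the table, using z = 2 as the module does for 95.5% limits.

```python
# Sketch: p-chart limits for the Morristown typesetting data (100 paragraphs per sample).
from math import sqrt

errors = [2, 4, 10, 4, 1, 1, 13, 9, 11, 0, 3, 4, 2, 2, 8,
          2, 3, 7, 3, 2, 3, 7, 4, 3, 2, 2, 0, 1, 3, 4]
n = 100
p_bar = sum(errors) / (n * len(errors))     # 120 / 3000 = 0.04
sigma_p = sqrt(p_bar * (1 - p_bar) / n)

ucl = p_bar + 2 * sigma_p
lcl = max(0.0, p_bar - 2 * sigma_p)
print(round(p_bar, 3), round(ucl, 3), round(lcl, 3))   # 0.04, about 0.079, about 0.001

outside = [i + 1 for i, e in enumerate(errors) if not lcl <= e / n <= ucl]
print(outside)   # samples whose error fraction falls outside these limits
```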

Bibliography Berry, L. L., A. Parasuraman, and V. A. Zeithaml. “Improving Service Quality in America: Lessons Learned,” The Academy of Management Executive 8, 2 (May 1994): 32–52.

DeVor, R. E., T. Chang, and J. W. Sutherland. Statistical Quality Design and Control: Contemporary Concepts and Methods. New York: Macmillan Publishing Co., Inc., 1992.

Besterfield, D. H. Quality Control, 4th ed. Upper Saddle River, NJ: Prentice Hall, 1994.

Dobyns, L., and C. Crawford-Mason. Quality or Else: The Revolution in World Business. New York: Houghton Mifflin Company, 1991.

Carr, L. P. “Applying Cost of Quality to a Service Business,” Sloan Management Review 33, 4 (Summer 1992): 72. Costin, H. Readings in Total Quality Management. New York: Dryden Press, 1994.

Elsayed, E. A., and D. Dietrich. “Quality Control and Its Applications in Production Systems,” Industrial Engineering Research & Development 24, 5 (November 1992): 2–3.

Crosby, P. B. Let’s Talk Quality. New York: McGraw-Hill Book Company, 1989.

Feigenbaum, A. V. Total Quality Control, 3rd ed. New York: McGraw-Hill Book Company, 1991.

Crosby, P. B. Quality Is Free. New York: McGrawHill Book Company, 1979.

Juran, J. M. “Made in the U.S.A.: A Renaissance in Quality,” Harvard Business Review 14, 4 (July–August 1993): 35–38.

Deming, W. E. Out of the Crisis. Cambridge, MA: MIT Center for Advanced Engineering Study, 1986. Denton, D. K. “Lessons on Competitiveness: Motorola’s Approach,” Production and Inventory Management Journal 32, 3 (Third Quarter 1991): 22.

Mitra, A. Fundamentals of Quality Control and Improvement, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 1998. Wheeler, D. J. “Why Three Sigma Limits?” Quality Digest (August 1996): 63–64.


● APPENDIX M1.1: USING QM FOR WINDOWS FOR SPC QM for Windows’ quality control module has the ability to compute all of the SPC control charts that we introduced in this chapter. To illustrate, Program M1.2 uses the p-chart data for ARCO found in Section M1.5. It computes p-bar, the standard deviation, and upper and lower control limits. PROGRAM M1.2 QM for Windows Analysis of ARCO’s Data to Compute p-chart Control Limits

MODULE 2
Dynamic Programming

LEARNING OBJECTIVES
After completing this module, students will be able to:
1. Understand the overall approach of dynamic programming.
2. Use dynamic programming to solve the shortest-route problem.
3. Develop dynamic programming stages.
4. Describe important dynamic programming terminology.
5. Describe the use of dynamic programming in solving knapsack problems.

MODULE OUTLINE
M2.1 Introduction
M2.2 Shortest-Route Problem Solved by Dynamic Programming
M2.3 Dynamic Programming Terminology
M2.4 Dynamic Programming Notation
M2.5 Knapsack Problem
Summary • Glossary • Key Equations • Solved Problem • Self-Test • Discussion Questions and Problems • Case Study: United Trucking • Internet Case Study • Bibliography


M2.1 INTRODUCTION ●

Dynamic programming breaks a difficult problem into subproblems.

Dynamic programming is a quantitative analysis technique that has been applied to large, complex problems that have sequences of decisions to be made. Dynamic programming divides problems into a number of decision stages; the outcome of a decision at one stage affects the decision at each of the next stages. The technique is useful in a large number of multiperiod business problems, such as smoothing production employment, allocating capital funds, allocating salespeople to marketing areas, and evaluating investment opportunities. Dynamic programming differs from linear programming in two ways. First, there is no algorithm (like the simplex method) that can be programmed to solve all problems. Dynamic programming is, instead, a technique that allows us to break up difficult problems into a sequence of easier subproblems, which are then evaluated by stages. Second, linear programming is a method that gives single-stage (one time period) solutions. Dynamic programming has the power to determine the optimal solution over a one-year time horizon by breaking the problem into 12 smaller one-month time horizon problems and to solve each of these optimally. Hence it uses a multistage approach. Solving problems with dynamic programming involves four steps: Four Steps of Dynamic Programming 1. Divide the original problem into subproblems called stages. 2. Solve the last stage of the problem for all possible conditions or states. 3. Working backward from the last stage, solve each intermediate stage. This is done by determining optimal policies from that stage to the end of the problem (last stage). 4. Obtain the optimal solution for the original problem by solving all stages sequentially.

In this module we show you how to solve two types of dynamic programming problems: network and nonnetwork. The shortest-route problem is a network problem that can be solved by dynamic programming. The knapsack problem is an example of a nonnetwork problem that can be solved using dynamic programming.

M2.2 SHORTEST-ROUTE PROBLEM SOLVED BY DYNAMIC PROGRAMMING ●

The first step is to divide the problem into subproblems or stages.

George Yates is about to make a trip from Rice, Georgia (1) to Dixieville, Georgia (7). George would like to find the shortest route. Unfortunately, there are a number of small towns between Rice and Dixieville. His road map is shown in Figure M2.1. The circles on the map, called nodes, represent cities such as Rice, Dixieville, Brown, and so on. The arrows, called arcs, represent highways between the cities. The distance in miles is indicated along each arc. This problem can, of course, be solved by inspection. But seeing how dynamic programming can be used on this simple problem will teach you how to solve larger and more complex problems. Step 1: The first step is to divide the problem into subproblems or stages. Figure M2.2 reveals the stages of this problem. In dynamic programming, we usually start with the last part of the problem, stage 1, and work backward to the beginning of the problem or network, which is stage 3 in this problem. Table M2.1 summarizes the arcs and arc distances for each stage.

FIGURE M2.1 Highway Map between Rice and Dixieville (the towns shown include Rice, Hope, Brown, Athens, Lakecity, Georgetown, and Dixieville; the arc distances in miles are summarized in Table M2.1)

FIGURE M2.2 Three Stages for the George Yates Problem (the network is divided into three stages, with stage 1 nearest the destination, node 7)

TABLE M2.1 Distance Along Each Arc
STAGE   ARC    ARC DISTANCE
1       5–7    14
1       6–7     2
2       4–5    10
2       3–5    12
2       3–6     6
2       2–5     4
2       2–6    10
3       1–4     4
3       1–3     5
3       1–2     2


FIGURE M2.3 Solution for the One-Stage Problem (the boxed minimum distances to node 7 are 14 from node 5 and 2 from node 6)

The second step is to solve the last stage—stage 1.

Step 2: We next solve stage 1, the last part of the network. Usually, this is trivial. We find the shortest path to the end of the network, node 7 in this problem. At stage 1, the shortest paths from node 5, and node 6 to node 7 are the only paths. You may also note in Figure M2.3 that the minimum distances are enclosed in boxes by the entering nodes to stage 1, node 5 and node 6. The objective is to find the shortest distance to node 7. The following table summarizes this procedure for stage 1. As mentioned previously, the shortest distance is the only distance at stage 1. STAGE 1

Step 3 involves moving backward solving intermediate stages.

BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
5                14                            5–7
6                 2                            6–7

Step 3: Moving backward, we now solve for stages 2 and 3. At stage 2 we will use Figure M2.4. If we are at node 4, the shortest and only route to node 7 is arcs 4–5 and 5–7. At node 3, the shortest route is arcs 3–6 and 6–7 with a total minimum distance of 8 miles. If we are at node 2, the shortest route is arcs 2–6 and 6–7 with a minimum total distance of 12 miles. This information is summarized in the stage 2 table:

STAGE 2
BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
4                24                            4–5, 5–7
3                 8                            3–6, 6–7
2                12                            2–6, 6–7

FIGURE M2.4 Solution for the Two-Stage Problem (the boxed minimum distances to node 7 are 24 from node 4, 8 from node 3, and 12 from node 2, with 14 from node 5 and 2 from node 6 carried over from stage 1)

The solution to stage 3 can be completed using the accompanying table and the network in Figure M2.5.

STAGE 3
BEGINNING NODE   SHORTEST DISTANCE TO NODE 7   ARCS ALONG THIS PATH
1                13                            1–3, 3–6, 6–7

Step 4: To obtain the optimal solution at any stage, all we consider are the arcs to the next stage and the optimal solution at the next stage. For stage 3, we only have to consider the three arcs to stage 2 (1–2, 1–3, and 1–4) and the optimal policies at stage 2, given in a previous table. This is how we arrived at the preceding solution. When the procedure is understood, we can perform all the calculations on one network. You may want to study the relationship between the networks and the tables because more complex problems are usually solved by using tables only.
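The same backward recursion can be sketched in a few lines of Python. This is an illustrative sketch, not code from the text: the dictionary layout and the function name are assumptions, and the arc distances are simply those of Table M2.1. It reproduces the result just derived—13 miles along route 1–3–6–7.

```python
# A minimal backward-recursion sketch for the George Yates network (arc data from
# Table M2.1). It assumes an acyclic stage network, as in this module's examples.
arcs = {
    (1, 2): 2, (1, 3): 5, (1, 4): 4,          # stage 3 arcs
    (2, 5): 4, (2, 6): 10, (3, 5): 12,        # stage 2 arcs
    (3, 6): 6, (4, 5): 10,
    (5, 7): 14, (6, 7): 2,                    # stage 1 arcs
}

def shortest_distances(arcs, destination=7):
    """Return {node: (minimum distance to the destination, best next node)}."""
    best = {destination: (0, None)}
    remaining = {i for (i, _) in arcs}
    while remaining:
        for node in sorted(remaining, reverse=True):
            successors = [(j, d) for (i, j), d in arcs.items() if i == node]
            if all(j in best for j, _ in successors):
                # pick the arc that minimizes (arc distance + best distance onward)
                j, d = min(successors, key=lambda jd: jd[1] + best[jd[0]][0])
                best[node] = (d + best[j][0], j)
                remaining.discard(node)
                break
    return best

best = shortest_distances(arcs)
print(best[1])   # (13, 3): 13 miles from node 1, leaving toward node 3 (route 1-3-6-7)
```

The same sketch, fed the arcs of any similar staged network, recovers both the minimum distance and the arc to take from every entering node, which is exactly the optimal policy developed in the tables above.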

M2.3 DYNAMIC PROGRAMMING TERMINOLOGY ●
Regardless of the type or size of a dynamic programming problem, there are some important terms and concepts that are inherent in every problem. Some of the more important follow:
1. Stage: a period or a logical subproblem.
2. State variables: possible beginning situations or conditions of a stage. These have also been called the input variables.

The fourth and final step is to find the optimal solution after all stages have been solved.


FIGURE M2.5 Solution for the Three-Stage Problem (the boxed minimum distance to node 7 from node 1 is 13, along arcs 1–3, 3–6, and 6–7)

3. Decision variables: alternatives or possible decisions that exist at each stage.
4. Decision criterion: a statement concerning the objective of the problem.
5. Optimal policy: a set of decision rules, developed as a result of the decision criteria, that gives optimal decisions for any entering condition at any stage.
6. Transformation: normally, an algebraic statement that reveals the relationship between stages.

IN ACTION

Dynamic Programming in Nursery Production Decisions

Managing a nursery that produces ornamental plants is difficult. In most cases, ornamental plants increase in value with increased growth. This value-added growth makes it difficult to determine when to harvest the plants and place them on the market. When plants are marketed earlier, revenues are generated earlier and the costs associated with plant growth are minimized. On the other hand, delaying the harvesting of the ornamental plants usually results in higher prices. But are the additional months of growth and costs worth the delay? In this case, dynamic programming was used to determine the optimal growth stages for ornamental plants. Each stage was associated with a possible growth level. The state variables included acres of production of ornamental plants and carryover plants from previous growing seasons. The ob-

jective of the dynamic programming problem was to maximize the after-tax cash flow. The taxes included self-employment, federal income, earned income credit, and state income taxes. The solution was to produce one- and three-gallon containers of ornamental plants. The one-gallon containers are sold in the fall and carried over for spring sales. Any one-gallon containers not sold in the spring are combined into threegallon container products for sale during the next season. Using dynamic programming helps to determine when to harvest to increase after-tax cash flow. Source: Stokes, Jeffery et al. “Optimal Marketing of Nursery Crops From Container-Based Production Systems,” American Journal of Agricultural Economics (February 1997): 235.


In the shortest-route problem, the following transformation can be given:

(distance from the beginning of a given stage to the last node) = (distance from the beginning of the previous stage to the last node) + (distance from the given stage to the previous stage)

This relationship shows how we were able to go from one stage to the next in solving for the optimal solution to the shortest-route problem. In more complex problems, we can use symbols to show the relationship between stages. State variables, decision variables, the decision criterion, and the optimal policy can be determined for any stage of a dynamic programming problem. This is done here for stage 2 of the George Yates shortest-route problem.
1. State variables for stage 2 are the entering nodes, which are (a) node 2, (b) node 3, and (c) node 4.
2. Decision variables for stage 2 are the following arcs or routes: (a) 4–5, (b) 3–5, (c) 3–6, (d) 2–5, and (e) 2–6.
3. The decision criterion is the minimization of the total distances traveled.
4. The optimal policy for any beginning condition is shown in Figure M2.6 and in the following table:

GIVEN THIS ENTERING CONDITION   THIS ARC WILL MINIMIZE TOTAL DISTANCE TO NODE 7
2                               2–6
3                               3–6
4                               4–5

Figure M2.6 may also be helpful in understanding some of the terminology used in the discussion of dynamic programming.

M2.4 DYNAMIC PROGRAMMING NOTATION ● In addition to dynamic programming terminology, we can also use mathematical notation to describe any dynamic programming problem. This helps us to set up and solve the problem. Consider stage 2 in the George Yates dynamic programming problem first discussed in Section M2.2. This stage can be represented by the diagram shown in Figure M2.7 (as could any given stage of a given dynamic programming problem). As you can see, for every stage, we have an input, decision, output, and return. Look again at stage 2 for the George Yates problem in Figure M2.6. The input to this stage is s2, which consists of nodes 2, 3, and 4. The decision at stage 2, or choosing which arc will

An input, decision, output, and return are specified for each stage.


FIGURE M2.6 Stage 2 from the Shortest-Route Problem

State variables are the entering nodes.
Decision variables are all the arcs.
The optimal policy is the arc, for any entering node, that will minimize total distance to the destination at this stage.

lead to stage 1, is represented by d2. The possible arcs or decisions are 4–5, 3–5, 3–6, 2–5, and 2–6. The output of stage 2 becomes the input to stage 1. The output from stage 2 is s1. The possible outputs from stage 2 are the exiting nodes, nodes 5 and 6. Finally, each stage has a return. For stage 2, the return is represented by r2. In our shortest-route problem, the return is the distance along the arcs in stage 2. These distances are 10 miles for arc 4–5, 12 miles for arc 3–5, 6 miles for arc 3–6, 4 miles for arc 2–5, and 10 miles for arc 2–6.

FIGURE M2.7 Input, Decision, Output, and Return for Stage 2 in George Yates’s Problem (the stage box receives input s2, takes decision d2, produces output s1, and yields return r2)


The same notation applies for the other stages and can be used at any stage. In general, we will use the following notation for these important concepts:

sn = input to stage n     (M2-1)
dn = decision at stage n     (M2-2)
rn = return at stage n     (M2-3)

You should also note that the input to one stage is also the output from another stage. For example, the input to stage 2, s2, is also the output from stage 3 (see Figure M2.7). This leads us to the following equation:

sn-1 = output from stage n     (M2-4)

The input to one stage is the output from another stage.

The final concept is transformation. The transformation function allows us to go from one stage to another. The transformation function for stage 2, t2, converts the input to stage 2, s2, and the decision made at stage 2, d2, to the output from stage 2, s1. Because the transformation function depends on the input and decision at any stage, it can be represented as t2(s2, d2). In general, the transformation function can be represented as follows:

tn = transformation function at stage n     (M2-5)

A transformation function allows us to go from one stage to another.

The following general formula allows us to go from one stage to another using the transformation function:

sn-1 = tn(sn, dn)     (M2-6)

Although this equation may seem complex, it is really just a mathematical statement of the fact that the output from a stage is a function of the input to the stage and any decisions made at that stage. In the George Yates shortest-route problem, the transformation function consisted of a number of tables. These tables showed how we could progress from one stage to another in order to solve the problem. For more complex problems, we need to use dynamic programming notation instead of tables. Another useful quantity is the total return at any stage. The total return allows us to keep track of the total profit or costs at each stage as we solve the dynamic programming problem. It can be given as follows:

fn = total return at stage n     (M2-7)
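The notation above maps naturally onto a small code sketch. The function below is illustrative only (its name and arguments are assumptions, not from the text); it evaluates one stage of the recursion fn = rn + fn-1, given a transformation function tn, a return function rn, and the table f_prev already computed for the following stage. It assumes every input state has at least one feasible decision.

```python
# A generic one-stage sketch of the recursion fn = rn + fn-1 (illustrative, not from the text).
def solve_stage(states, decisions, t_n, r_n, f_prev, maximize=True):
    """Return f_n as {s_n: (best total return from here on, best decision d_n)}."""
    pick = max if maximize else min
    f_n = {}
    for s in states:
        feasible = [(r_n(s, d) + f_prev[t_n(s, d)][0], d)   # return now + best return later
                    for d in decisions(s) if t_n(s, d) in f_prev]
        f_n[s] = pick(feasible)
    return f_n
```

Calling this function once per stage, starting from the last stage and working backward, is exactly the four-step procedure described in Section M2.1; the stage tables in the next section are the tabular equivalent of the dictionaries it builds.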

M2.5 KNAPSACK PROBLEM ● The knapsack problem involves the maximization or the minimization of a value, such as profits or costs. Like a linear programming problem, there are restrictions. Imagine a knapsack or pouch that can only hold a certain weight or volume. We can place different types of items in the knapsack. Our objective will be to place items in the knapsack to maximize total value without breaking the knapsack because of too much weight or a similar restriction.

Types of Knapsack Problems There are many kinds of problems that can be classified as knapsack problems. Choosing items to place in the cargo compartment of an airplane and selecting which payloads to put on the next NASA Space Shuttle are examples. The restriction can be volume, weight,

The total return function allows us to keep track of profits and costs.


or both. Some scheduling problems are also knapsack problems. For example, we may want to determine which jobs to complete in the next two weeks. The two-week period is the knapsack, and we want to load it with jobs in such a way as to maximize profits or minimize costs. The restriction is the number of days or hours during the two-week period.

Roller’s Air Transport Service Problem The objective of this problem is to maximize profits.

Rob Roller owns and operates Roller’s Air Transport Service, which ships cargo by plane to most large cities in the United States and Canada. The remaining capacity for one of the flights from Seattle to Vancouver is 10 tons. There are four different items that Rob can ship between Seattle and Vancouver. Each item has a weight in tons, a net profit in thousands of dollars, and a total number of that item that is available for shipping. This information is presented in Table M2.2.

MODELING IN THE REAL WORLD Defining the Problem

Developing a Model

Reducing Electric Production Costs Using Dynamic Programming

The Southern Company, with service areas in Georgia, Alabama, Mississippi, and Florida, is a major provider of electric service, with about 240 generating units. In recent years, fuel costs have increased faster than other costs. Annual fuel costs are about $2.5 billion, representing about one-third of total expenses for the Southern Company. The problem for the Southern Company is to reduce total fuel costs. To deal with this fuel cost problem, the company developed a state-of-the-art dynamic programming model. The dynamic programming model is embedded in the Wescouger optimization program, which is a computer program used to control electric generating units and reduce fuel costs through better utilization of existing equipment.

Acquiring Input Data

Data were collected on past and projected electric usage. In addition, daily load/generation data were analyzed. Load/generation charts were used to investigate the fuel requirements for coal, nuclear, hydroelectric, and gas/oil.

Developing a Solution

The solution of the dynamic programming model provides both short-term planning guidelines and long-term fuel usage for the various generating units. Optimal maintenance schedules for generating units are obtained using Wescouger.

Testing the Solution

Analyzing the Results

Implementing the Results

To test the accuracy of the Wescouger optimization program, Southern used a real-time economic dispatch program. The results were a very close match. In addition, the company put the solution through an acid test, in which seasoned operators compared the results against their intuitive judgment. Again, the results were consistent. The results were analyzed in terms of their impact on the use of various fuels, the usage of various generating units, and maintenance schedules for generating units. Analyzing the results also revealed other needs. This resulted in a full-color screen editing routine, auxiliary programs to automate data input, and software to generate postanalysis reports. The Southern Company implemented the dynamic programming solution. Over a seven-year period, the results saved the company over $140 million.

Source: S. Erwin, et al. “Using an Optimization Software to Lower Overall Electric Production Costs for Southern Company,” Interfaces 21, 1 (January–February 1991): 27–41.


TABLE M2.2 Items to Be Shipped
ITEM   WEIGHT   PROFIT/UNIT   NUMBER AVAILABLE
1      1        $3            6
2      4         9            1
3      3         8            2
4      2         5            2

TABLE M2.3 Relationship Between Items and Stages
ITEM   STAGE
1      4
2      3
3      2
4      1

Problem Setup Roller’s Air Transport problem is an ideal problem to solve using dynamic programming. Stage 4 will be item 1, stage 3 will be item 2, stage 2 will be item 3, and stage 1 will be item 4. This is shown in Table M2.3. During the solution, we will be using stage numbers. Roller’s Air Transport problem can be represented graphically (see Figure M2.8). As you can see, each item is represented by a stage. Look at stage 4, which is item 1. The total weight that can be used is represented by s4. This amount is 10 because we haven’t assigned any units to be shipped at this time. The decision at this stage is d4 (the number of units of item 1 we will ship). If d4 is 1, for example, we will be shipping 1 unit of item 1. Also note that r4 is the return or profit at stage 4 (item 1). If we ship 1 unit of item 1, the profit will be $3.00 (see Table M2.2). As mentioned previously, the decision variable, dn, represents the number of units of each item (stage) that can be shipped. Looking back at the original problem, we see that the problem is constrained by the number of units. This is summarized in the following table: STAGE

STAGE   MAXIMUM VALUE OF DECISION
4       6
3       1
2       2
1       2

FIGURE M2.8 Roller’s Air Transport Service Problem (four stages in series: stage 4 is item 1, stage 3 is item 2, stage 2 is item 3, and stage 1 is item 4; each stage n receives input sn, takes decision dn, yields return rn, and passes output sn-1 to the next stage, with s4 = 10 tons entering stage 4)


The Transformation Functions
Next, we need to look at the transformation function. The general transformation function for knapsack problems follows:

sn-1 = (an × sn) − (bn × dn) + cn

Note that an, bn, and cn are coefficients for the problem, and that dn represents the decision at stage n—the number of units to ship at that stage. The following chart shows the transformation coefficients for Rob Roller’s transport problem:

COEFFICIENTS OF TRANSITION FUNCTION
STAGE   an   bn   cn
4       1    1    0
3       1    4    0
2       1    3    0
1       1    2    0

First note that s4 is 10, the total weight that can be shipped. Because s4 represents the first item, all 10 tons can be utilized. The transformation equations for the four stages are as follows:

s3 = s4 − 1d4     stage 4     (a)
s2 = s3 − 4d3     stage 3     (b)
s1 = s2 − 3d2     stage 2     (c)
s0 = s1 − 2d1     stage 1     (d)
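As a quick sanity check, the four equations can be evaluated directly. The sketch below is illustrative (the helper name is an assumption); the decision values it plugs in, d4 = 6, d3 = 0, d2 = 0, d1 = 2, come from the final solution reached later in this section.

```python
# Checking transformation equations (a)-(d) with one feasible shipping plan.
def next_state(s, d, weight_per_unit):
    return s - weight_per_unit * d      # sn-1 = sn - (weight per unit) * dn

s4 = 10
s3 = next_state(s4, 6, 1)               # 4 tons remain after shipping 6 units of item 1
s2 = next_state(s3, 0, 4)               # 4
s1 = next_state(s2, 0, 3)               # 4
s0 = next_state(s1, 2, 2)               # 0 -- all 10 tons of capacity are used
print(s3, s2, s1, s0)                   # 4 4 4 0
```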

Consider stage 3. Equation (b) reveals that the number of tons still available after stage 3, s2, is equal to the number of tons available before stage 3, s3, minus the number of tons shipped at stage 3, 4d3. In this equation, 4d3 means that each item at stage 3 weighs 4 tons.

The Return Functions
Next, we will look at the return function for each stage. This is the general form for the return function:

rn = (an × sn) + (bn × dn) + cn

Note that an, bn, and cn are the coefficients for the return function. Using this general form of the return function, we can put the return function values in the following table:

COEFFICIENTS OF RETURN FUNCTION
STAGE   DECISIONS       an   bn   cn
4       0 ≤ dn ≤ 6      0    3    0
3       0 ≤ dn ≤ 1      0    9    0
2       0 ≤ dn ≤ 2      0    8    0
1       0 ≤ dn ≤ 2      0    5    0


The lower value for each decision is zero and the upper value is the total number available. The bn coefficient is the profit per item shipped. The actual return functions are

r4 = 3d4
r3 = 9d3
r2 = 8d2
r1 = 5d1

Stage-by-Stage Solution
As you would expect, the return at any stage, rn, is equal to the profit per unit at that stage multiplied by the number of units shipped at that stage, dn. With this information, we can solve Roller’s Air Transport problem, starting with stage 1 (item 4). The following table shows the solution for the first stage. You may wish to refer to Figure M2.8 for this discussion.

STAGE 1
s1   d1   r1   s0   f0   f1
0    0    0    0    0    0
1    0    0    1    0    0
2    0    0    2    0    0
2    1    5    0    0    5
3    0    0    3    0    0
3    1    5    1    0    5
4    0    0    4    0    0
4    1    5    2    0    5
4    2   10    0    0   10
5    0    0    5    0    0
5    1    5    3    0    5
5    2   10    1    0   10
6    0    0    6    0    0
6    1    5    4    0    5
6    2   10    2    0   10
7    0    0    7    0    0
7    1    5    5    0    5
7    2   10    3    0   10
8    0    0    8    0    0
8    1    5    6    0    5
8    2   10    4    0   10
9    0    0    9    0    0
9    1    5    7    0    5
9    2   10    5    0   10
10   0    0   10    0    0
10   1    5    8    0    5
10   2   10    6    0   10

Because we don’t know how many tons will be available at stage 1, we must consider all possibilities. Thus the number of tons available at stage 1, s1, can vary from 1 to 10. This is seen in the first column of numbers for stage 1. The number of units that we ship at stage 1, d1, can vary from 0 to 2. We can’t go over 2 because the number available is only 2. For any decision we compute the return at stage 1, r1, by multiplying the number of items shipped by 5, the profit per item. The profit at this stage will be 0, 5, or 10, depending on whether 0, 1, or 2 items are shipped. Note that the total return at this stage, f1, is the same as r1 because this is the only stage we are considering so far. Also note that the total

We consider all possibilities.

f1 is the total return at the first stage. The total return before the first stage is f0.


return before stage 1, f0, is 0 because this is the beginning of the solution and we are shipping nothing at this point. The solution for stage 1 shows the best decision, the return for this stage, and the total return given all possible number of tons available (0 to 10 tons). Using the results of stage 1, we can now proceed to stage 2. The solution for stage 2 is as follows:

STAGE 2
s2   d2   r2   s1   f1   f2
0    0    0    0    0    0
1    0    0    1    0    0
2    0    0    2    5    5
3    0    0    3    5    5
3    1    8    0    0    8
4    0    0    4   10   10
4    1    8    1    0    8
5    0    0    5   10   10
5    1    8    2    5   13
6    0    0    6   10   10
6    1    8    3    5   13
6    2   16    0    0   16
7    0    0    7   10   10
7    1    8    4   10   18
7    2   16    1    0   16
8    0    0    8   10   10
8    1    8    5   10   18
8    2   16    2    5   21
9    0    0    9   10   10
9    1    8    6   10   18
9    2   16    3    5   21
10   0    0   10   10   10
10   1    8    7   10   18
10   2   16    4   10   26

The solution for stage 2 is found in exactly the same way as for stage 1. At stage 2 we still have to consider all possible number of tons available (from 0 to 10). See the s2 column (the first column). At stage 2 (item 3) we still only have 2 units that can be shipped. Thus d2 (second column) can range from 0 to a maximum of 2. The return for each s2 and d2 combination at stage 2, r2, is shown in the third column. These numbers are the profit per item at this stage, 8, times the number of items shipped. Because items shipped at stage 2 can be 0, 1, or 2, the profit at this stage can be 0, 8, or 16. The return for stage 2 can also be computed from the return function: r2  8d2. Now look at the fourth column, s1, which lists the number of items available after stage 2. This is also the number of items available for stage 1. To get this number, we have to subtract the number of tons we are shipping at stage 2 (which is the tonnage per unit times the number of units) from the number of tons available before stage 2 (s2). Look at the row in which s2 is 6 and d2 is 1. We have 6 tons available before stage 2, and we are shipping 1 item, which weighs 3 tons. Thus we will have 3 tons still available after this stage. The s1 values can also be determined using the transformation function, which is s1  s2  3d2. The last two columns of stage 2 contain the total return. The return after stage 1 and before stage 2 is f1. These are the same values that appeared under the f1 column for stage 1. The return after stage 2 is f2. It is equal to the return from stage 2 plus the total return before stage 2.


Stage 3 is solved in the same way as stages 1 and 2. The following table presents the solution for stage 3; look at each row and make sure that you understand the meaning of each value.

STAGE 3
s3   d3   r3   s2   f2   f3
0    0    0    0    0    0
1    0    0    1    0    0
2    0    0    2    5    5
3    0    0    3    8    8
4    0    0    4   10   10
4    1    9    0    0    9
5    0    0    5   13   13
5    1    9    1    0    9
6    0    0    6   16   16
6    1    9    2    5   14
7    0    0    7   18   18
7    1    9    3    8   17
8    0    0    8   21   21
8    1    9    4   10   19
9    0    0    9   21   21
9    1    9    5   13   22
10   0    0   10   26   26
10   1    9    6   16   25

Now we solve the last stage of the problem, stage 4. The following table shows the solution procedure:

STAGE 4
s4   d4   r4   s3   f3   f4
10   0    0   10   26   26
10   1    3    9   22   25
10   2    6    8   21   27
10   3    9    7   18   27
10   4   12    6   16   28
10   5   15    5   13   28
10   6   18    4   10   28

The first thing to note is that we only have to consider one value for s4, because we know the number of tons available for stage 4; s4 must be 10 because we have all 10 tons available. There are six possible decisions at this stage, or six possible values for d4, because the number of available units is 6. The other columns are computed in the same way. Note that the return after stage 4, f4, is the total return for the problem. We see that the highest profit is 28. We also see that there are three possible decisions that will give


this level of profit, shipping 4, 5, or 6 items. Thus we have alternate optimal solutions. One possible solution is as follows:

FINAL SOLUTION
STAGE   OPTIMAL DECISION   OPTIMAL RETURN
4       6                  18
3       0                   0
2       0                   0
1       2                  10
Total   8                  28

We start with shipping 6 units at stage 4. Note that s3 is 4 from the stage 4 calculations, given that d4 is 6. We use this value of 4 and go to the stage 3 calculations. We find the rows in which s3 is 4 and pick the row with the highest total return, f3. In this row d3 is 0 items with a total return (f3) of 10. As a result, the number of units available, s2, is still 4. We then go to the calculations for stage 2 and then stage 1 in the same way. This gives us the optimal solution already described. See if you can find one of the alternate optimal solutions.
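The whole stage-by-stage calculation can be reproduced in a short script. The sketch below is illustrative rather than a transcription of the tables: the variable names and data layout are assumptions, but the weights, profits, and availabilities are those of Table M2.2, and the recursion is the same fn = rn + fn-1 used above. It confirms the maximum profit of 28 and recovers one of the alternate optima.

```python
# Backward dynamic programming for Roller's Air Transport (a sketch, not from the text).
capacity = 10
# stage n -> (weight per unit, profit per unit, units available); stage 4 is item 1, etc.
stages = {4: (1, 3, 6), 3: (4, 9, 1), 2: (3, 8, 2), 1: (2, 5, 2)}

f = {n: {} for n in range(0, 5)}
f[0] = {s: (0, None) for s in range(capacity + 1)}   # after stage 1 there is nothing left to decide

for n in (1, 2, 3, 4):                               # solve stage 1 first, then work backward to stage 4
    w, p, avail = stages[n]
    for s in range(capacity + 1):                    # every possible input s_n
        options = [(p * d + f[n - 1][s - w * d][0], d)
                   for d in range(min(avail, s // w) + 1)]
        f[n][s] = max(options)                       # best (f_n, d_n) pair for this input

# Recover one optimal shipping plan from the stored decisions.
plan, s = {}, capacity
for n in (4, 3, 2, 1):
    _, d = f[n][s]
    plan[n] = d
    s -= stages[n][0] * d

print(f[4][capacity][0])   # 28 (thousand dollars)
print(plan)                # {4: 6, 3: 0, 2: 0, 1: 2} -- one of the alternate optima
```

Because ties are broken toward the larger decision, the script reports the plan that ships six units of item 1; re-breaking ties the other way surfaces the plans that ship four or five units of item 1 instead, which is how the three alternate optima noted above arise.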

Summary Dynamic programming is a flexible, powerful technique. A large number of problems can be solved using this technique, including the shortest-route and knapsack problems. The shortest-route problem finds the path through a network that minimizes total distance traveled, while the knapsack problem maximizes value or return. An example of a knapsack problem is to determine what to ship in a cargo plane to maximize total profits given the weight and size constraints of the cargo plane. The dynamic programming technique requires four steps: (1) Divide the original problem into stages, (2) solve the last stage first, (3) work backward solving each subsequent stage, and (4) obtain the optimal solution after all stages have been solved. Dynamic programming requires that we specify stages, state variables, decision variables, decision criteria, an optimal policy, and a transformation function for each specific problem. Stages are logical subproblems. State variables are possible input values or beginning conditions. The decision variables are the alternatives that we have at each stage, such as which route to take in a shortest-route problem. The decision criterion is the objective of the problem, such as finding the shortest route or maximizing total return in a knapsack problem. The optimal policy helps us obtain an optimal solution at any stage, and the transformation function allows us to go from one stage to the next.

Glossary Dynamic Programming. A quantitative technique that works backward from the end of the problem to the beginning of the problem in determining the best decision for a number of interrelated decisions. Stage. A logical subproblem in a dynamic programming problem. State Variable. A term used in dynamic programming to describe the possible beginning situations or conditions of a stage.


Decision Variable. The alternatives or possible decisions that exist at each stage of a dynamic programming problem. Decision Criterion. A statement concerning the objective of a dynamic programming problem. Optimal Policy. A set of decision rules, developed as a result of the decision criteria, that gives optimal decisions at any stage of a dynamic programming problem. Transformation. An algebraic statement that shows the relationship between stages in a dynamic programming problem.

Key Equations
(M2-1) sn = input to stage n. This is also the output from stage n + 1.
(M2-2) dn = decision at stage n.
(M2-3) rn = return at stage n. The return function, usually profit or loss, at stage n.
(M2-4) sn-1 = output from stage n. This is also the input to stage n − 1.
(M2-5) tn = transformation function at stage n. A transformation function that allows us to go from one stage to another.
(M2-6) sn-1 = tn(sn, dn). The general relationship that shows how the output from any stage is a function of the input to the stage and the decisions made at that stage.
(M2-7) fn = total return at stage n. This equation gives the total return (profit or costs) at any stage. It is obtained by summing the return at each stage, rn.

Solved Problem Solved Problem M2-1 Lindsey Cortizan would like to travel from the university to her hometown over the holiday season. The road map from the university (node 1) to her home (node 7) is shown in Figure M2.9. What is the best route that will minimize total distance traveled?

FIGURE M2.9 Road Map for Lindsey Cortizan (nodes 1 through 7, with the university at node 1 and her home at node 7; the arc distances appear in the stage tables below)


FIGURE M2.10 Solution for the Lindsey Cortizan Problem (the boxed minimum distances to node 7 are 10 from node 5, 13 from node 6, 52 from node 4, 41 from node 3, 28 from node 2, and 50 from node 1)

Solution
The solution to this problem is identical to the one presented earlier in the module. First, the problem is broken into three stages. See the network in Figure M2.10. Working backward, we start by solving stage 1. The closest and only distance from node 6 to node 7 is 13, and the closest and only distance from node 5 to node 7 is 10. We proceed to stage 2. The minimum distances are 52, 41, and 28 from node 4, node 3, and node 2 to node 7. Finally, we complete stage 3. The optimal solution is 50. The shortest route is 1–2–5–7 and is shown in the following network. This problem can also be solved using tables, as shown following the network solution.

Problem type: minimization network
Number of stages: 3
Transition function type: sn-1 = sn − dn
Recursion function type: fn = rn + fn-1

STAGE   NUMBER OF DECISIONS
3       3
2       4
1       2

STAGE   STARTING NODE   ENDING NODE   RETURN VALUE
3       1               2             22
3       1               3             18
3       1               4             20
2       2               5             18
2       2               6             36
2       3               6             28
2       4               6             39
1       5               7             10
1       6               7             13


STAGE 1
s1   d1    r1   s0   f0   f1
6    6–7   13   7    0    13
5    5–7   10   7    0    10

STAGE 2
s2   d2    r2   s1   f1   f2
4    4–6   39   6    13   52
3    3–6   28   6    13   41
2    2–6   36   6    13   49
2    2–5   18   5    10   28

STAGE 3
s3   d3    r3   s2   f2   f3
1    1–4   20   4    52   72
1    1–3   18   3    41   59
1    1–2   22   2    28   50

FINAL SOLUTION
STAGE   OPTIMAL DECISION   OPTIMAL RETURN
3       1–2                22
2       2–5                18
1       5–7                10
Total                      50


SELF-TEST • • •

Before taking the self-test, refer back to the learning objectives at the beginning of the module, the notes in the margins, and the glossary at the end of the module. Use the key at the back of the book to correct your answers. Restudy pages that correspond to any questions that you answered incorrectly or material you feel uncertain about.

1. Dynamic programming divides problems into a. nodes. b. arcs. c. decision stages. d. branches. e. variables. 2. Possible beginning situations or conditions of a dynamic programming problem are called a. stages. b. state variables. c. decision variables. d. optimal policy. e. transformation. 3. The statement concerning the objective of a dynamic programming problem is called a. stages. b. state variables. c. decision variables. d. optimal policy. e. decision criterion. 4. The first step of a dynamic programming problem is to a. define the nodes. b. define the arcs. c. divide the original problem into stages. d. determine the optimal policy. e. none of the above. 5. Working forward from the first stage to the ending stage a. is done for large dynamic programming problems to achieve a solution. b. is done for any dynamic programming problem. c. is the first step of a dynamic programming problem solution. d. is the last step of a dynamic programming problem solution. e. none of the above. 6. An algebraic statement that reveals the relationship between stages is called a. the transformation. b. state variables. c. decision variables. d. the optimal policy. e. the decision criterion. 7. In this chapter, dynamic programming was used to solve what type of problem? a. quantity discount b. just-in-time inventory c. shortest-route d. minimal spanning tree e. maximal flow

8. In dynamic programming terminology, a period or logical subproblem is called a. the transformation. b. a state variable. c. a decision variable. d. the optimal policy. e. a stage. 9. The statement that the distance from the beginning stage is equal to the distance from the preceding stage to the last node plus the distance from the given stage to the preceding stage is called a. the transformation. b. state variables. c. decision variables. d. the optimal policy. e. stages. 10. In dynamic programming, sn is a. the input to the stage n. b. the decision at stage n. c. the return at stage n. d. the output of stage n. e. none of the above. 11. The relationship that the distance from the beginning stage is equal to the distance from the preceding stage to the last node plus the distance for the given stage to the preceding stage is used to solve which type of problem? a. knapsack b. JIT c. shortest-route d. minimal spanning tree e. maximal flow 12. In dynamic programming, rn is a. the input to the stage n. b. the decision at stage n. c. the return at stage n. d. the output of stage n. e. none of the above. 13. In dynamic programming, on is a. the input to the stage n. b. the decision at stage n. c. the return at stage n. d. the output of stage n. e. none of the above. 14. In dynamic programming, dn is a. the input to the stage n. b. the decision at stage n. c. the return at stage n. d. the output of stage n. e. none of the above.

Discussion Questions and Problems

Discussion Questions
M2-1 What is a stage in dynamic programming?
M2-2 What is the difference between a state variable and a decision variable?
M2-3 Describe the meaning and use of a decision criterion.
M2-4 Do all dynamic programming problems require an optimal policy?
M2-5 Why is transformation important for dynamic programming problems?

Problems •• M2-6 Refer to Figure M2.1. What is the shortest route between Rice and Dixieville if the road between Hope and Georgetown is improved and the distance is reduced to 4 miles?

••M2-7

Due to road construction between Georgetown and Dixieville, a detour must be taken through country roads (Figure M2.1). Unfortunately, this detour has increased the distance from Georgetown to Dixieville to 14 miles. What should George do? Should he take a different route?

•• M2-8

The Rice Brothers have a gold mine between Rice and Brown. In their zeal to find gold, they have blown up the road between Rice and Brown. The road will not be in service for five months. What should George do? Refer to Figure M2.1.

•• M2-9

Solve the shortest-route problem of Figure M2.11.

FIGURE M2.11 (for Problem M2-9)


• M2-10 Solve the shortest-route problem of Figure M2.12.

FIGURE M2.12 (for Problem M2-10)

• M2-11 Mail Express, an overnight mail service, delivers mail to customers throughout the United States, Canada, and Mexico. Fortunately, Mail Express has additional capacity on one of its cargo planes. To maximize profits, Mail Express takes shipments from local manufacturing plants to warehouses for other companies. Currently, there is room for another 6 tons. The following table shows the items that can be shipped, their weights, the expected profit for each, and the number of available parts. How many units of each item do you suggest that Mail Express ship? ITEMS TO BE SHIPPED ITEM

ITEM   WEIGHT (TONS)   PROFIT/UNIT   NUMBER AVAILABLE
1      1               $3            6
2      2                9            1
3      3                8            2
4      1                2            2

•• M2-12 Leslie Bessler must travel from her hometown to Denver to see her friend Austin. Given the road map of Figure M2.13, what route will minimize the distance that she travels?

FIGURE M2.13 (for Problem M2-12)


••M2-13 An air cargo company has the following shipping requirements. Two planes are available with a total capacity of 11 tons. How many of each item should be shipped to maximize profits? ITEMS TO BE SHIPPED ITEM

ITEM   WEIGHT (TONS)   PROFIT/UNIT   NUMBER AVAILABLE
1      1               $3            6
2      2                9            1
3      3                8            2
4      2                5            5
5      5                8            6
6      1                2            2

••M2-14 Because of a new manufacturing and packaging procedure, the weight of item 2 in Problem M2-13 can be cut in half. Does this change the number or types of items that can be shipped by the air transport company?

•• •M2-15 What is the shortest route through the network of Figure M2.14?

FIGURE M2.14 (for Problem M2-15)

••M2-16 The road between node 6 and node 11 is no longer in service due to construction. • (Refer to Problem M2-15.) What is the shortest route given this situation?


Case Study United Trucking Like many trucking operations, United Trucking got started with one truck and one owner—Judson Maclay. Judson is an individualist and always liked to do things his way. He was a fast driver, and many people called the 800 number on the back of his truck when he worked for Hartmann Trucking. After two years with Hartmann and numerous calls about his bad driving, Judson decided to go out on his own. United Trucking was the result. In the early days of United Trucking, Judson was the only driver. On the back of his truck was the message: How do you like my driving? Call 1-800-AMI-FAST. He was convinced that some people actually tried to call the number. Soon, a number of truck operators had the same or similar messages on the back of their trucks. After three years of operation, Judson had 15 other trucks and drivers working for him. He traded his driving skills for office and management skills. Although 1-800-AMI-FAST was no longer visible, Judson decided to never place an 800 number on the back of any of his trucks. If someone really wanted to complain, they could look up United Trucking in the phone book. Judson liked to innovate with his trucking company. He knew that he could make more money by keeping his trucks full. Thus he decided to institute a discount trucking service. He gave substantial discounts to customers that would accept delivery to the West Coast within two weeks. Customers got a great price, and he made more money and kept his trucks full. Over time, Judson developed steady customers that would usually have loads to go at the discounted price. On one ship-

ment, he had an available capacity of 10 tons in several trucks going to the West Coast. Ten items can be shipped at discount. The weight, profit, and number of items available are shown in the following table.

ITEM   WEIGHT (TONS)   PROFIT/UNIT   AVAILABLE
1      1               $10           2
2      1                10           1
3      2                 5           3
4      1                 7           20
5      3                25           2
6      1                11           1
7      4                30           2
8      3                50           1
9      1                10           2
10     1                 5           4

Discussion Questions 1. What do you recommend Judson should do? 2. If the total available capacity was 20 tons, how would this change Judson’s decision?

INTERNET CASE STUDY See our Internet home page at http://www.prenhall.com/render for this additional case study: Briarcliff Electronics.

Bibliography
Bellman, R. E. Dynamic Programming. Princeton, NJ: Princeton University Press, 1957.
Bourland, Karla. “Parallel-Machine Scheduling with Fractional Operator Requirements,” IEE Transactions (September 1994): 56.
Carraway, R. L. “A Dynamic Programming Approach to Stochastic Assembly Line Balancing,” Management Science 35 (April 1989): 459–471.
El-Rayes, Khaled, et al. “Optimized Scheduling for Highway Construction,” Transactions of AACE International (February 1997): 311.
Elmaghraby, Salah. “Resource Allocation via Dynamic Programming,” European Journal of Operations Research (January 1993): 199.
Hillard, Michael R., et al. “Scheduling the Operation Desert Storm Airlift: An Advanced Automated Scheduling Support System,” Interfaces 21, 1 (January–February 1992): 131–146.
Howard, R. A. Dynamic Programming. Cambridge, MA: The MIT Press, 1960.
Ibarake, Toshihide. “A Dynamic Programming Method for Single Machine Scheduling,” European Journal of Operations Research (July 1994): 72.
Idem, Fidelis. “An Approach to Planning for Physician Requirements in Developing Countries Using Dynamic Programming,” Operations Research (July 1992): 607.
Stokes, Jeffery, et al. “Optimal Marketing of Nursery Crops from Container-Based Production Systems,” American Journal of Agricultural Economics (February 1997): 235.
Toelle, Richard. “A Dynamic Programming Method for Determining Optimum Liquidation Quantities for Slow-Moving Inventories,” Computers and Industrial Engineering (July 1992): 353.


M O D U L E

3

Decision Theory and the Normal Distribution

LEARNING OBJECTIVES After completing this module, students will be able to: 1. Understand how the normal curve can be used in performing break-even analysis.

2. Compute the expected value of perfect information (EVPI) using the normal curve.

3. Perform marginal analysis where products have a constant marginal profit and loss.

MODULE OUTLINE
M3.1 Introduction
M3.2 Break-Even Analysis and the Normal Distribution
M3.3 EVPI and the Normal Distribution
Summary • Glossary • Key Equations • Solved Problems • Self-Test • Discussion Questions and Problems • Bibliography

Appendix M3.1: Derivation of the Break-Even Point


M3.1 INTRODUCTION ● The normal distribution can be used when there are a large number of states and/or alternatives.

In Chapters 3 and 4 in your text we looked at examples that dealt with only a small number of states of nature and decision alternatives. But what if there were 50, 100, or even 1,000s of states and/or alternatives? If you used a decision tree or decision table, solving the problem would be virtually impossible. This module shows how decision theory can be extended to handle problems of such magnitude. We begin with the case of a firm facing two decision alternatives under conditions of numerous states of nature. The normal probability distribution, which is widely applicable in business decision making, is first used to describe the states of nature.

M3.2 BREAK-EVEN ANALYSIS AND THE NORMAL DISTRIBUTION ● Break-even analysis, often called cost-volume analysis, answers several common management questions relating the effect of a decision to overall revenues or costs. At what point will we break even, or when will revenues equal costs? At a certain sales volume or demand level, what revenues will be generated? If we add a new product line, will this action increase revenues? In this section we look at the basic concepts of break-even analysis and explore how the normal probability distribution can be used in the decisionmaking process.

Barclay Brothers New Product Decision
The Barclay Brothers Company is a large manufacturer of adult parlor games. Its marketing vice president, Rudy Barclay, must make the decision whether or not to introduce a new game called Strategy into the competitive market. Naturally, the company is concerned with costs, potential demand, and profit it can expect to make if it markets Strategy. Rudy identifies the following relevant costs:

fixed cost = $36,000 (costs that do not vary with volume produced, such as new equipment, insurance, rent, and so on)
variable cost per game produced = $4 (costs that are proportional to the number of games produced, such as materials and labor)

The selling price per unit is set at $10. The break-even point is that number of games at which total revenues are equal to total costs. It can be expressed as1

break-even point (units) = fixed cost / (price/unit − variable cost/unit)     (M3-1)

So in Barclay’s case,

break-even point (games) = $36,000 / ($10 − $4) = $36,000 / $6 = 6,000 games of Strategy

For a detailed explanation of the break-even equation, see Appendix M3.1 at the end of this module.

M3.2 Break-Even Analysis and the Normal Distribution

M3-3

Any demand for the new game that exceeds 6,000 units will result in a profit, whereas a demand less than 6,000 units will cause a loss. For example, if it turns out that demand is 11,000 games of Strategy, Barclay’s profit would be $30,000. Revenue (11,000 games  $10/game)

$110,000

Less expenses Fixed cost Variable cost (11,000 games  $4/game)

$36,000 $44,000

Total expense

$ 80,000

Profit

$ 30,000

If demand is exactly 6,000 games (the break-even point), you should be able to compute for yourself that profit equals $0. Rudy Barclay now has one useful piece of information that will help him make the decision about introducing the new product. If demand is less than 6,000 units, a loss will be incurred. But actual demand is not known. Rudy decides to turn to the use of a probability distribution to estimate demand.

Probability Distribution of Demand Actual demand for the new game can be at any level—0 units, 1 unit, 2 units, 3 units, up to many thousands of units. Rudy needs to establish the probability of various levels of demand in order to proceed. In many business situations the normal probability distribution is used to estimate the demand for a new product. It is appropriate when sales are symmetric around the mean expected demand and follow a bell-shaped distribution. Figure M3.1 illustrates a typical normal curve that we discussed at length in Chapter 2. Each curve has a unique shape that depends on two factors: the mean of the distribution (m) and the standard deviation of the distribution (s). For Rudy Barclay to use the normal distribution in decision making, he must be able to specify values for m and s. This isn’t always easy for a manager to do directly, but if he or she has some idea of the spread, an analyst can determine the appropriate values. In the Barclay example, Rudy might think that the most likely sales figure is 8,000 but that demand might go as low as 5,000 or as high as 11,000. Sales could conceivably go even beyond those limits; say, there is a 15% chance of being below 5,000 and another 15% chance of being above 11,000.

  Standard Deviation of Demand Describes Spread

The normal distribution can be used to estimate demand.

FIGURE M3.1 Shape of a Typical Normal Distribution (σ, the standard deviation of demand, describes the spread; μ, the mean demand, describes the center of the distribution)

Module 3

DECISION THEORY

AND THE

NORMAL DISTRIBUTION

FIGURE M3.2 Normal Distribution for Barclay’s Demand

Mean of the Distribution,  15% Chance Demand Exceeds 11,000 Games

15% Chance Demand is Less Than 5,000 Games

5,000

8,000



11,000

X Demand (Games)

Because this is a symmetric distribution, Rudy decides that a normal curve is appropriate. In Chapter 2, we saw how to take the data in a normal curve such as Figure M3.2 and compute the value of the standard deviation. The formula for calculating the number of standard deviations that any value of demand is away from the mean is Z

demand  m s

(M3-2)

where Z is the number of standard deviations above or below the mean, m. It is provided in the table in Appendix A at the end of this text. We see that the area under the curve to the left of 11,000 units demanded is 85% of the total area, or 0.85. From Appendix A, the Z value for 0.85 is approximately 1.04. This means that a demand of 11,000 units is 1.04 standard deviations to the right of the mean, m. With m  8,000, Z  1.04, and a demand of 11,000, we can easily compute s. Z

demand  m s

or 1.04 

11,000  8,000 s

or 1.04s  3,000 or s

3,000  2,885 units 1.04

At last, we can state that Barclay’s demand appears to be normally distributed, with a mean of 8,000 games and a standard deviation of 2,885 games. This allows us to answer some questions of great financial interest to management, such as what the probability is of breaking even. Recalling that the break-even point is 6,000 games of Strategy, we must find the number of standard deviations from 6,000 to the mean. Z 

break-even point  m s 6,000  8,000 2,000   0.69 2,885 2,885

M3.2 Break-Even Analysis and the Normal Distribution

M3-5

FIGURE M3.3 Probability of Breaking Even for Barclay’s New Game Loss Area  24.51%

Break-Even 6,000 units

Profit Area  75.49%



This is represented in Figure M3.3. Because Appendix A is set up to handle only positive Z values, we can find the Z value for 0.69, which is 0.7549 or 75.49% of the area under the curve. The area under the curve for 0.69 is just 1 minus the area computed for 0.69, or 1  0.7549. Thus, 24.51% of the area under the curve is to the left of the breakeven point of 6,000 units. Hence, P(loss)  P(demand  break-even)  0.2451

Computing the probability of making a profit.

 24.51% P(profit)  P(demand  break-even)  0.7549  75.49% The fact that there is a 75% chance of making a profit is useful management information for Rudy to consider. Before leaving the topic of break-even analysis, we should point out two caveats: 1. We have assumed that demand is normally distributed. If we should find that this is not reasonable, other distributions may be applied. These are beyond the scope of this book. 2. We have assumed that demand was the only random variable. If one of the other variables (price, variable cost, or fixed costs) were a random variable, a similar procedure could be followed. If two or more variables are both random, the mathematics becomes very complex. This is also beyond our level of treatment.

Using EMV to Make a Decision In addition to knowing the probability of suffering a loss with Strategy, Barclay is concerned about the expected monetary value (EMV) of producing the new game. He knows, of course, that the option of not developing Strategy has an expected monetary value of $0. That is, if the game is not produced and marketed, his profit will be $0. If, however, the EMV of producing the game is greater than $0, he will recommend the more profitable strategy. To compute the EMV for this strategy, Barclay uses the expected demand, m, in the following linear profit function: EMV  (price/unit  variable cost/unit)  (mean demand)  fixed costs  ($10  $4)(8,000 units)  $36,000  $48,000  $36,000  $12,000

(M3-3)

Computing EMV.

M3-6

Module 3

DECISION THEORY

AND THE

NORMAL DISTRIBUTION

Rudy has two choices at this point. He can recommend that the firm proceed with the new game; if so, he estimates there is a 75% chance of at least breaking even and an EMV of $12,000. Or, he might prefer to do further marketing research before making a decision. This brings up the subject of the expected value of perfect information.

M3.3 EVPI AND THE NORMAL DISTRIBUTION ● Let’s return to the Barclay Brothers problem to see how to compute the expected value of perfect information (EVPI) and expected opportunity loss (EOL) associated with introducing the new game. The two steps follow:

Two Steps to Compute EVPI and EOL 1. Determine the opportunity loss function. 2. Use the opportunity loss function and the unit normal loss integral (given in Appendix B at the end of the book) to find EOL, which is the same as EVPI.

Opportunity Loss Function The opportunity loss function describes the loss that would be suffered by making the wrong decision. We saw earlier that Rudy’s break-even point is 6,000 sets of the game Strategy. If Rudy produces and markets the new game and sales are greater than 6,000 units, he has made the right decision; in this case there is no opportunity loss ($0). If, however, he introduces Strategy and sales are less than 6,000 games, he has selected the wrong alternative. The opportunity loss is just the money lost if demand is less than the break-even point; for example, if demand is 5,999 games, Barclay loses $6 ( $10 price/unit  $4 cost/unit). With a $6 loss for each unit of sales less than the break-even point, the total opportunity loss is $6 multiplied times the number of units under 6,000. If only 5,000 games are sold, the opportunity loss will be 1,000 units less than the breakeven point times $6 per unit  $6,000. For any level of sales, X, Barclay’s opportunity loss function can be expressed as follows: opportunity loss 

 X) $6(6,000 $0

for X  6,000 games for X  6,000 games

In general, the opportunity loss function can be computed by Opportunity loss 

K$0(break-even point  X)

for X  break-even point for X  break-even point

where K  loss per unit when sales are below the break-even point X  sales in units

(M3-4)

M3.3 EVPI and the Normal Distribution

M3-7

Expected Opportunity Loss The second step is to find the expected opportunity loss. This is the sum of the opportunity losses multiplied by the appropriate probability values. But in Barclay’s case there are a very large number of possible sales values. If the break-even point is 6,000 games, there will be 6,000 possible sales values, from 0, 1, 2, 3, up to 6,000 units. Thus, determining the EOL would require setting 6,000 probability values that correspond to the 6,000 possible sales values. These numbers would be multiplied and added together, a very lengthy and tedious task. When we assume that there are an infinite (or very large) number of possible sales values that follow a normal distribution, the calculations are much easier. Indeed, when the unit normal loss integral is used, EOL can be computed as follows: EOL  KsN(D)

Using the unit normal loss integral.

(M3-5)

where EOL  expected opportunity loss K  loss per unit when sales are below the break-even point s  standard deviation of the distribution D



m  break-even point s



(M3–6)

where   absolute value sign m  mean sales N(D)  value for the unit normal loss integral in Appendix B for a given value of D Here is how Rudy can compute EOL for his situation: K  $6 s  2,885 D





8,000  6,000  0.69  0.60  0.09 2,885

Now refer to the unit normal loss integral table. Look in the “0.6” row and read over to the “0.09” column. This is N(0.69), which is 0.1453. N(0.69)  0.1453 Therefore, EOL  KN(0.69)  ($6)(2,885)(0.1453)  $2,515.14 Because EVPI and EOL are equivalent, the expected value of perfect information is also $2,515.14. This is the maximum amount that Rudy should be willing to spend on additional marketing information. The relationship between the opportunity loss function and the normal distribution is shown in Figure M3.4. This graph shows both the opportunity loss and the normal distribution with a mean of 8,000 games and a standard deviation of 2,885. To the right of the break-even point we note that the loss function is 0. To the left of the break-even point,

EVPI and EOL are equivalent.

M3-8

Module 3

DECISION THEORY

FIGURE M3.4 Barclay’s Opportunity Loss Function

AND THE

NORMAL DISTRIBUTION

  $6 (6,000  X) for x ≤ 6,000 games Opportunity Loss   $0 for x > 6,000

  8,000 Games

Loss ($)

Normal Distribution

  8,000   2,885 Slope 6 6,000



X Demand (Games)

Break-Even Point (XB)

the opportunity loss function increases at a rate of $6 per unit, hence the slope of 6. The use of Appendix B and Equation M3-5 allows us to multiply the $6 unit loss times each of the probabilities between 6,000 units and 0 units and to sum these multiplications.
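For readers who would rather compute N(D) directly than look it up in Appendix B, the sketch below uses the standard closed form of the unit normal loss integral, N(D) = φ(D) − D[1 − Φ(D)]. The use of SciPy here is an added illustration, not part of the module.

    from scipy.stats import norm

    def unit_normal_loss(d):
        """Unit normal loss integral: N(D) = phi(D) - D * (1 - Phi(D))."""
        return norm.pdf(d) - d * (1 - norm.cdf(d))

    K, sigma = 6.0, 2_885.0
    mu, break_even = 8_000.0, 6_000.0

    D = abs(mu - break_even) / sigma          # Equation M3-6, about 0.69
    eol = K * sigma * unit_normal_loss(D)     # Equation M3-5
    evpi = eol                                # EVPI and EOL are equivalent

    print(f"D = {D:.2f}, N(D) = {unit_normal_loss(D):.4f}")
    print(f"EOL = EVPI = ${evpi:,.2f}")
    # Prints roughly $2,500; the module's table lookup, with D rounded to 0.69,
    # gives N(0.69) = 0.1453 and therefore $2,515.14.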

Summary In this module we looked at decision theory problems that involved many states of nature and alternatives. As an alternative to decision tables and decision trees, we learned to use the normal distribution to solve break-even problems and find the expected monetary value and EVPI. We need to know the mean and standard deviation of the normal distribution and to be certain that it is the appropriate probability distribution to apply. Other continuous distributions can also be used, but they are beyond the level of this book.

Glossary Break-Even Analysis. The analysis of relationships between profit, costs, and demand level. Opportunity Loss Function. A function that relates opportunity loss in dollars to sales in units. Unit Normal Loss Integral. A table that is used in the determination of EOL and EVPI.

Key Equations

(M3-1) Break-even point (in units) = fixed cost / (price/unit − variable cost/unit)
       The formula that provides the volume at which total revenue equals total costs.

(M3-2) Z = (demand − μ) / σ
       The number of standard deviations that demand is from the mean, μ.

(M3-3) EMV = (price/unit − variable cost/unit)(mean demand) − fixed costs
       The expected monetary value.

(M3-4) Opportunity loss = K(break-even point − X)   for X ≤ break-even point
                        = $0                        for X > break-even point
       The opportunity loss function.

(M3-5) EOL = KσN(D)
       The expected opportunity loss.

(M3-6) D = |μ − break-even point| / σ
       An intermediate value used to compute EOL.

Solved Problems

Solved Problem M3-1
Terry Wagner is considering self-publishing a book on yoga. She has been teaching yoga for more than 20 years. She believes that the fixed costs of publishing the book will be about $10,000. The variable cost is $5.50 per book, and the price of the yoga book to bookstores is expected to be $12.50. What is the break-even point for Terry?

Solution
This problem can be solved using the break-even formula in this module:

    break-even point (BEP) in units = $10,000 / ($12.50 − $5.50) = $10,000 / $7 = 1,429 units

Solved Problem M3-2
In this module we discussed how we could use the normal curve to determine the expected opportunity loss (EOL). We have determined that D = 0.70, the standard deviation is 1,500, and K is $10. Given these data, determine EOL.

Solution
The first step is to go to Appendix B and get the value of N(D) when D is 0.7. Looking at Appendix B for the unit normal loss integral, we see that N(D) is 0.1429. This value can be placed into the equation for EOL as follows:

    EOL = KσN(D) = ($10)(1,500)(0.1429) = $2,143.50
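Both solved problems can be verified numerically. The short sketch below is an added check, not part of the original solutions.

    import math

    # Solved Problem M3-1: break-even point (Equation M3-1)
    bep = 10_000 / (12.50 - 5.50)
    print(math.ceil(bep))            # 1429 units (1,428.6 rounded up)

    # Solved Problem M3-2: EOL = K * sigma * N(D), with the Appendix B value N(0.70) = 0.1429
    eol = 10 * 1_500 * 0.1429
    print(eol)                       # 2143.5 dollars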


SELF-TEST • • •

Before taking the self-test, refer back to the learning objectives at the beginning of the module and the glossary at the end of the module. Use the key at the back of the book to correct your answers. Restudy pages that correspond to any questions that you answered incorrectly or material you feel uncertain about.

1. Another name for break-even analysis is a. normal analysis. b. variable cost analysis. c. cost-volume analysis. d. standard analysis. e. probability analysis. 2. Which of the following is not needed to compute the break-even point? a. fixed costs b. price/unit c. probability of break-even d. variable cost/unit e. All of the above are needed to compute the breakeven point. 3. What is the demand minus the mean divided by the standard deviation? a. Z b. the value for K c. the value for the break-even point d. the loss area e. the profit area 4. What probability distribution is typically used with break-even analysis and decision theory? a. uniform distribution b. normal distribution c. binomial distribution d. exponential distribution e. no distribution is used

5. The price/unit minus the variable cost/unit times the mean demand minus the fixed cost is a. the break-even point. b. the standard deviation. c. a value for Z. d. the expected opportunity loss. e. the expected monetary value. 6. The EOL is the same numerically as which of the following? a. EMV b. Z c. the break-even point d. EVPI e. none of the above 7. In determining the opportunity loss function, the variable K represents a. the loss per unit when sales are below the breakeven point. b. the profit per unit when sales are above the breakeven point. c. the profit per unit when sales are below the breakeven point. d. unit normal loss integral. e. none of the above. 8. Appendix A is used to get a value for ________________ .


Discussion Questions and Problems

Discussion Questions
M3-1 What is the purpose of conducting break-even analysis?
M3-2 Under what circumstances can the normal distribution be used in break-even analysis? What does it usually represent?
M3-3 What assumption do you have to make about the relationship between EMV and a state of nature when you are using the mean to determine the value of EMV?
M3-4 Describe how EVPI can be determined when the distribution of the states of nature follows a normal distribution.

Problems
• M3-5 A publishing company is planning on developing an advanced quantitative analysis book for graduate students in doctoral programs. The company estimates that sales will be normally distributed, with mean sales of 60,000 copies and a standard deviation of 10,000 books. The book will cost $16 to produce and will sell for $24; fixed costs will be $160,000.
(a) What is the company's break-even point?
(b) What is the EMV?
•• M3-6 Refer to Problem M3-5.
(a) What is the opportunity loss function?
(b) Compute the expected opportunity loss.
(c) What is the EVPI?
(d) What is the probability that the new book will be profitable?
(e) What do you recommend that the firm do?
•• M3-7 Barclay Brothers Company, the firm discussed in this module, thinks it underestimated the mean for its game Strategy. Rudy Barclay thinks expected sales may be 9,000 games. He also thinks that there is a 20% chance that sales will be less than 6,000 games and a 20% chance that he can sell more than 12,000 games.
(a) What is the new standard deviation of demand?
(b) What is the probability that the firm will incur a loss?
(c) What is the EMV?
(d) How much should Rudy be willing to pay now for a marketing research study?
•• M3-8 True-Lens, Inc., is considering producing long-wearing contact lenses. Fixed costs will be $24,000, with a variable cost per set of lenses of $8. The lenses will sell for $24.00 per set to optometrists.
(a) What is the firm's break-even point?
(b) If expected sales are 2,000 sets, what should True-Lens do, and what are the expected profits?
• M3-9 Leisure Supplies produces sinks and ranges for travel trailers and recreational vehicles. The unit price on their double sink is $28 and the unit cost is $20. The fixed cost in producing the double sink is $16,000. Mean sales for double sinks have been 35,000 units, and the standard deviation has been estimated to be 8,000 sinks. Determine the expected monetary value for these sinks. If the standard deviation were actually 16,000 units instead of 8,000 units, what effect would this have on the expected monetary value?
•• M3-10 Belt Office Supplies sells desks, lamps, chairs, and other related supplies. The company's executive lamp sells for $45, and Elizabeth Belt has determined that the break-even point for executive lamps is 30 lamps per year. If Elizabeth does not make the break-even point, she loses $10 per lamp. The mean sales for executive lamps has been 45, and the standard deviation is 30.
(a) Determine the opportunity loss function.
(b) Determine the expected opportunity loss.
(c) What is the EVPI?


•• M3-11 Elizabeth Belt is not completely certain that the loss per lamp is $10 if sales are below the break-even point (refer to Problem M3-10). The loss per lamp could be as low as $8 or as high as $15. What effect would these two values have on the expected opportunity loss?

• M3-12 Leisure Supplies is considering the possibility of using a new process for producing sinks. This new process would increase the fixed cost by $16,000. In other words, the fixed cost would double (see Problem M3-9). This new process will improve the quality of the sinks and reduce the cost it takes to produce each sink. It will cost only $19 to produce the sinks using the new process. (a) What do you recommend? (b) Leisure Supplies is considering the possibility of increasing the purchase price to $32 using the old process given in Problem M3-9. It is expected that this will lower the mean sales to 26,000 units. Should Leisure Supplies increase the selling price?

•• M3-13 Quality Cleaners specializes in cleaning apartment units and office buildings. Although the work is not too enjoyable, Joe Boyett has been able to realize a considerable profit in the Chicago area. Joe is now thinking about opening another Quality Cleaners in Milwaukee. To break even, Joe would need to get 200 cleaning jobs per year. For every job under 200, Joe will lose $80. Joe estimates that the average sales in Milwaukee are 350 jobs per year, with a standard deviation of 150 jobs. A marketing research team has approached Joe with a proposition to perform a marketing study on the potential for his cleaning business in Milwaukee. What is the most that Joe would be willing to pay for the marketing research?

•• M3-14 Diane Kennedy is contemplating the possibility of going into competition with Primary Pumps, a manufacturer of industrial water pumps. Diane has gathered some interesting information from a friend of hers who works for Primary. Diane has been told that the mean sales for Primary are 5,000 units and the standard deviation is 50 units. The opportunity loss per pump is $100. Furthermore, Diane has been told that the most that Primary is willing to spend for marketing research for the demand potential for pumps is $500. Diane is interested in knowing the breakeven point for Primary Pumps. Given this information, compute the break-even point.

•• M3-15 Jack Fuller estimates that the break-even point for EM 5, a standard electrical motor, is 500 motors. For any motor that is not sold, there is an opportunity loss of $15. The average sales have been 700 motors, and 20% of the time sales have been between 650 and 750 motors. Jack has just been approached by Radner Research, a firm that specializes in performing marketing studies for industrial products, to perform a standard marketing study. What is the most that Jack would be willing to pay for the marketing research?

•• M3-16 Jack Fuller believes that he has made a mistake in his sales figures for EM 5 (see Problem M3-15 for details). He believes that the average sales are 750 instead of 700 units. Furthermore, he estimates that 20% of the time, sales will be between 700 and 800 units. What effect will these changes have on your estimate of the amount that Jack should be willing to pay for the marketing research?

Bibliography

Drezner and Wesolowsky. "The Expected Value of Perfect Information in Facility Location," Operations Research (March–April 1980): 395.
Hammond, J. S., R. L. Keeney, and H. Raiffa. "The Hidden Traps in Decision Making," Harvard Business Review (September–October 1998): 47–60.
Keaton, M. "A New Functional Approximation to the Standard Normal Loss Integral," Inventory Management Journal (Second Quarter 1994): 58.


● APPENDIX M3.1: DERIVATION OF THE BREAK-EVEN POINT

1. Total costs = fixed cost + (variable cost/unit)(number of units)
2. Total revenues = (price/unit)(number of units)
3. At the break-even point, total costs = total revenues
4. That is, fixed cost + (variable cost/unit)(number of units) = (price/unit)(number of units)
5. Solving for the number of units at the break-even point, we get

   break-even point (units) = fixed cost / (price/unit − variable cost/unit)

This equation is the same as Equation M3-1.


M O D U L E

4

Material Requirements Planning and Just-in-Time Inventory

LEARNING OBJECTIVES After completing this module, you will be able to: 1. Describe the use of material requirements planning (MRP) in solving dependent-demand inventory problems.

2. Discuss just-in-time (JIT) inventory concepts to reduce inventory levels and costs.

MODULE OUTLINE M4.1 Introduction M4.2 Dependent Demand: The Case for Material Requirements Planning

M4.3 Just-in-Time Inventory Control Summary • Glossary • Solved Problems • Self-Test • Discussion Questions and Problems • Internet Case Study • Bibliography


M4.1 INTRODUCTION ●

MRP is used for dependent demand situations.

In Chapter 6 we investigated a number of inventory problems in which the demand for one product was independent of the demand for other products. In this module we introduce an inventory model in which the demand for one product is dependent on the demand for other products. The approach used to solve this type of inventory problem is called material requirements planning (MRP). In addition, we explore how organizations can make manufacturing more efficient by reducing in-process inventory using a technique called just-in-time (JIT) inventory control.

M4.2 DEPENDENT DEMAND: THE CASE FOR MATERIAL REQUIREMENTS PLANNING ●

In all the inventory models we discussed in Chapter 6, we assumed that the demand for one item was independent of the demand for other items. For example, the demand for refrigerators is usually independent of the demand for toaster ovens. Many inventory problems, however, are interrelated; the demand for one item is dependent on the demand for another item. Consider a manufacturer of small power lawn mowers. The demand for lawn mower wheels and spark plugs is dependent on the demand for lawn mowers. Four wheels and one spark plug are needed for each finished lawn mower. Usually, when the demand for different items is dependent, the relationship between the items is known and constant. Thus, you should forecast the demand for the final products and compute the requirements for component parts. As with the inventory models discussed previously, the major questions that must be answered are how much to order and when to order. But with dependent demand, inventory scheduling and planning can be very complex indeed. In these situations, MRP can be employed effectively. Some of the benefits of MRP are

1. Increased customer service and satisfaction
2. Reduced inventory costs
3. Better inventory planning and scheduling
4. Higher total sales
5. Faster response to market changes and shifts
6. Reduced inventory levels without reduced customer service

Although most MRP systems are computerized, the analysis is straightforward and similar from one computerized system to the next. Here is the typical procedure.

Material Structure Tree

Parents and components are identified in the material structure tree.

Step 1 is to develop a material structure tree. Let's say that demand for product A is 50 units. Each unit of A requires 2 units of B and 3 units of C. Now, each unit of B requires 2 units of D and 3 units of E. Furthermore, each unit of C requires 1 unit of E and 2 units of F. Thus, the demand for B, C, D, E, and F is completely dependent on the demand for A. Given this information, a material structure tree can be developed for the related inventory items (see Figure M4.1). The structure tree has three levels: 0, 1, and 2. Items above any level are called parents, and items below any level are called components. There are three parents: A, B, and C. Each parent item has at least one level below it. Items B, C, D, E, and F are components because each item has at least one level above it.

FIGURE M4.1  Material Structure Tree for Item A

    Level 0                 A
                          /   \
    Level 1            B(2)   C(3)
                       /  \    /  \
    Level 2         D(2) E(3) E(1) F(2)

In this structure tree, B and C are both parents and components. Note that the number in parentheses in Figure M4.1 indicates how many units of that particular item are needed to make the item immediately above it. Thus B(2) means that it takes 2 units of B for every unit of A, and F(2) means that it takes 2 units of F for every unit of C. After the material structure tree has been developed, the number of units of each item required to satisfy demand can be determined. This information can be displayed as follows:

    Part B:  2 × number of A's = 2 × 50 = 100
    Part C:  3 × number of A's = 3 × 50 = 150
    Part D:  2 × number of B's = 2 × 100 = 200
    Part E:  3 × number of B's + 1 × number of C's = 3 × 100 + 1 × 150 = 450
    Part F:  2 × number of C's = 2 × 150 = 300

Thus, for 50 units of A we need 100 units of B, 150 units of C, 200 units of D, 450 units of E, and 300 units of F. Of course, these numbers could have been determined directly from the material structure tree by multiplying the numbers along the branches by the demand for A, which is 50 units for this problem. For example, the number of units of D needed is simply 2 × 2 × 50 = 200 units.
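The explosion of the structure tree into part requirements can also be done programmatically. The Python sketch below is an illustration only; the dictionary layout and function name are this sketch's own, and the data are the Figure M4.1 relationships.

    # Units of each component needed per unit of its parent (Figure M4.1)
    bill_of_materials = {
        "A": {"B": 2, "C": 3},
        "B": {"D": 2, "E": 3},
        "C": {"E": 1, "F": 2},
    }

    def explode(item, quantity, requirements):
        """Accumulate gross requirements for every component below `item`."""
        for component, per_parent in bill_of_materials.get(item, {}).items():
            needed = per_parent * quantity
            requirements[component] = requirements.get(component, 0) + needed
            explode(component, needed, requirements)
        return requirements

    print(explode("A", 50, {}))
    # {'B': 100, 'D': 200, 'E': 450, 'C': 150, 'F': 300}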

Gross and Net Material Requirements Plan

The next step is to construct a gross material requirements plan. This is a time schedule that shows when an item must be ordered from suppliers when there is no inventory on hand, or when the production of an item must be started, in order to satisfy the demand for the finished product at a particular date. Let's assume that all of the items are produced or manufactured by the same company. It takes one week to make A, two weeks to make B, one week to make C, one week to make D, two weeks to make E, and three weeks to make F.

The material structure tree shows how many units are needed at every level of production.

FIGURE M4.2  Gross Material Requirements Plan for 50 Units of A
[Time-phased chart listing, for each item, its required date and order release date: A requires 50 units in week 6, order release in week 5 (lead time 1 week); B requires 100 units in week 5, order release in week 3 (2 weeks); C requires 150 units in week 5, order release in week 4 (1 week); D requires 200 units in week 3, order release in week 2 (1 week); E requires 300 units in week 3 and 150 units in week 4, order releases in weeks 1 and 2 (2 weeks); F requires 300 units in week 4, order release in week 1 (3 weeks).]

Using on-hand inventory to compute net requirements.

With this information, the gross material requirements plan can be constructed to reveal the production schedule needed to satisfy the demand of 50 units of A at a future date. Refer to Figure M4.2. The interpretation of Figure M4.2 is as follows: If you want 50 units of A at week 6, you must start the manufacturing process in week 5. Thus, in week 5 you need 100 units of B and 150 units of C. These two items take 2 weeks and 1 week to produce, respectively (see the lead times), so production of B should be started in week 3 and production of C in week 4 (see the order releases for these items). Working backward, the same computations can be made for all the other items. The material requirements plan graphically reveals when each item should be started and completed in order to have 50 units of A at week 6. Now a net requirements plan can be developed, given the on-hand inventory in Table M4.1; here is how it is done. Using these data, we can develop a net material requirements plan that includes gross requirements, on-hand inventory, net requirements, planned-order receipts, and planned-order releases for each item. It is developed by beginning with A and working backward through the other items. Figure M4.3 shows a net material requirements plan for product A. The net requirements plan is constructed like the gross requirements plan. Starting with item A, we work backward, determining net requirements for all items. These computations are done by referring constantly to the structure tree and the lead times. The gross requirement for A is 50 units in week 6. Ten items are on hand, so the net requirement and the planned-order receipt are both 40 items in week 6. Because of the one-week lead time, the planned-order release is 40 items in week 5 (see the arrow in Figure M4.3 connecting the order receipt and order release).

TABLE M4.1  On-Hand Inventory

    ITEM    ON-HAND INVENTORY
    A       10
    B       15
    C       20
    D       10
    E       10
    F       5

FIGURE M4.3  Net Material Requirements Plan for 50 Units of A
[For each item, the plan lists gross requirements, on-hand inventory, net requirements, planned-order receipts, and planned-order releases by week: A: gross 50 in week 6, on hand 10, net 40, order release in week 5 (lead time 1 week). B: gross 80 in week 5, on hand 15, net 65, release in week 3 (2 weeks). C: gross 120 in week 5, on hand 20, net 100, release in week 4 (1 week). D: gross 130 in week 3, on hand 10, net 120, release in week 2 (1 week). E: gross 195 in week 3 and 100 in week 4, on hand 10, net 185 and 100, releases in weeks 1 and 2 (2 weeks). F: gross 200 in week 4, on hand 5, net 195, release in week 1 (3 weeks).]

Look down column 5 and refer to the structure tree in Figure M4.1: eighty (= 2 × 40) units of B and 120 (= 3 × 40) units of C are required in week 5 in order to have a total of 50 units of A in week 6. The letter A in the upper-right corner of the entries for items B and C means that this demand for B and C was generated as a result of the demand for the parent, A. Now the same type of analysis is done for B and C to determine the net requirements for D, E, and F.
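The netting and lead-time offsetting logic just described can be sketched in a few lines. The Python example below is a simplified single-pass illustration (the names and data layout are this sketch's own), applied to the week-5 and week-6 numbers for items A, B, and C from Figure M4.3.

    def net_requirement(gross, on_hand):
        """Net requirement = gross requirement minus usable on-hand inventory (never negative)."""
        return max(gross - on_hand, 0)

    def order_release_week(week_needed, lead_time):
        """The planned-order release precedes the planned-order receipt by the lead time."""
        return week_needed - lead_time

    items = {
        # item: (gross requirement, on hand, week needed, lead time in weeks)
        "A": (50, 10, 6, 1),
        "B": (80, 15, 5, 2),
        "C": (120, 20, 5, 1),
    }

    for name, (gross, on_hand, week, lead) in items.items():
        net = net_requirement(gross, on_hand)
        print(name, "net:", net, "release week:", order_release_week(week, lead))
    # A net: 40 release week: 5
    # B net: 65 release week: 3
    # C net: 100 release week: 4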

Two or More End Products

So far, we have considered only one end product. For most manufacturing companies, there are normally two or more end products that use some of the same parts or components. All of the end products must be incorporated into a single net material requirements plan. In the MRP example just discussed, we developed a net material requirements plan for product A. Now we show how to modify the net material requirements plan when a second end product is introduced. The second end product will be called AA. The material structure tree for product AA is shown below:

            AA
           /  \
        D(3)  F(2)

Let's assume that we need 10 units of AA. With this information we can compute the gross requirements for AA:

    Part D:  3 × number of AA's = 3 × 10 = 30
    Part F:  2 × number of AA's = 2 × 10 = 20

To develop a net material requirements plan, we need to know the lead time for AA. Let’s assume that it is one week. We also assume that we need 10 units of AA in week 6 and that we have no units of AA on hand.

IN ACTION

MRP Builds Profits at Compaq

Cal Monteith, Compaq’s manager of master planning and production control in Houston, was in the process of phasing out one of Compaq’s personal computer models when he was told that Compaq had underestimated demand. The new schedule suggested that he build 10,000 more PCs. Could he do it? Questions Monteith faced were: What parts were on hand and on order? What labor was available? Could the plant handle the capacity? Did vendors have the capacity? What product lines could be rescheduled? Traditionally, amassing such information required not only MRP reports but a variety of additional reports. Even then a response was based on partial information.

New software, including a combination of spreadsheets, inquiry languages, and report writers, allowed Monteith to search huge databases, isolate the relevant data (customer orders, forecasts, inventory, and capacity), and do some quick calculations. One such piece of software is FastMRP, which is based in Ottawa, Canada. Another is Carp Systems International of Kanata, Ontario. The result: Compaq was able to make schedule adjustments that added millions of dollars to the bottom line. Source: New York Times (October 18, 1992): F9; Carp System International and FastMRP.

FIGURE M4.4  Net Material Requirements Plan, Including AA
[The plan of Figure M4.3 with one additional row for AA (gross 10 in week 6, on hand 0, net 10, order release in week 5, lead time 1 week) and additional entries generated by AA for its components: D has a further gross requirement of 30 in week 5, net 30, order release in week 4; F has a further gross requirement of 20 in week 5, net 20, order release in week 2.]

Now, we are in a position to modify the net material requirements plan for product A to include AA. This is done in Figure M4.4. Look at the top row of the figure. As you can see, we have a gross requirement of 10 units of AA in week 6. We don’t have any units of AA on hand, so the net requirement is also 10 units of AA. Because it takes one week to make AA, the order release of 10 units of AA is in week 5. This means that we start making AA in week 5 and have the finished units in week 6. Because we start making AA in week 5, we must have 30 units of D and 20 units of F in week 5. See the rows for D and F in Figure M4.4. The lead time for D is one week. Thus, we must give the order release in week 4 to have the finished units of D in week 5. Note that there was no inventory on hand for D in week 5. The original 10 units of inventory of D were used in week 5 to make B, which was subsequently used to make A. We


also need to have 20 units of F in week 5 to produce 10 units of AA by week 6. Again, we have no on-hand inventory of F in week 5. The original 5 units were used in week 4 to make C, which was subsequently used to make A. The lead time for F is three weeks. Thus, the order release for 20 units of F must be in week 2. See the F row in Figure M4.4. This example shows how the inventory requirements of two products can be reflected in the same net material requirements plan. Some manufacturing companies can have more than 100 end products that must be coordinated in the same net material requirements plan. Although such a situation can be very complicated, the same principles we used in this example are employed. Remember that computer programs have been developed to handle large and complex manufacturing operations. In addition to using MRP to handle end products and finished goods, MRP can also be used to handle spare parts and components. This is important because most manufacturing companies sell spare parts and components for maintenance. A net material requirements plan should also reflect these spare parts and components.

M4.3 JUST-IN-TIME INVENTORY CONTROL ●

With JIT, inventory arrives just before it is needed.

During the past two decades, there has been a trend to make the manufacturing process more efficient. One objective is to have less in-process inventory on hand. This is known as just-in-time (JIT) inventory. With this approach, inventory arrives just in time to be used during the manufacturing process to produce subparts, assemblies, or finished goods. One technique of implementing JIT is a manual procedure called Kanban. Kanban in Japanese means “card.” With a dual-card Kanban system, there is a conveyance Kanban, or C-Kanban, and a production Kanban, or P-Kanban. The Kanban system is very simple. Here is how it works:

Four Steps of Kanban

1. A user takes a container of parts or inventory along with its accompanying C-Kanban to his or her work area. When there are no more parts or the container is empty, the user returns the empty container along with the C-Kanban to the producer area.
2. At the producer area, there is a full container of parts along with a P-Kanban. The user detaches the P-Kanban from the full container of parts. Then the user takes the full container of parts along with the original C-Kanban back to his or her area for immediate use.
3. The detached P-Kanban goes back to the producer area along with the empty container. The P-Kanban is a signal that new parts are to be manufactured or that new parts are to be placed into the container. When the container is filled, the P-Kanban is attached to the container.
4. This process repeats itself during the typical workday.

The dual-card Kanban system is shown in Figure M4.5.

As seen in Figure M4.5, full containers along with their C-Kanban go from the storage area to a user area, typically on a manufacturing line. During the production process, parts in the container are used up. When the container is empty, the empty container along with the same C-Kanban goes back to the storage area. Here, the user picks up a new full container. The P-Kanban from the full container is removed and sent back to the production area along with the empty container to be refilled.

FIGURE M4.5  The Kanban System
[Diagram of the dual-card Kanban loop among the producer area, the storage area, and the user area: full containers with C-Kanbans flow from storage to the user area; empty containers with C-Kanbans return to storage; detached P-Kanbans and empty containers go back to the producer area to be refilled; refilled containers with P-Kanbans attached return to storage.]

At a minimum, two containers are required using the Kanban system. One container is used at the user area, while another container is being refilled for future use. In reality, there are usually more than two containers. This is how inventory control is accomplished. Inventory managers can introduce additional containers and their associated PKanbans into the system. In a similar fashion, the inventory manager can remove containers and the P-Kanbans to have tighter control over inventory buildups. In addition to being a simple, easy-to-implement system, the Kanban system can also be very effective in controlling inventory costs and in uncovering production bottlenecks. Inventory arrives at the user area or on the manufacturing line just when it is needed. Inventory does not build up unnecessarily, cluttering the production line or adding to unnecessary inventory expense. The Kanban system reduces inventory levels and makes for a more effective operation. It is like putting the production line on an inventory diet. Like any diet, the inventory diet imposed by the Kanban system makes the production operation more streamlined. Furthermore, production bottlenecks and problems can be uncovered. Many production managers remove containers and their associated P-Kanban from the Kanban system in order to starve the production line to uncover bottlenecks and potential problems. In implementing a Kanban system, a number of work rules or Kanban rules are normally implemented. One typical Kanban rule is that no containers are filled without the appropriate P-Kanban. Another rule is that each container must hold exactly the specified number of parts or inventory items. These and similar rules make the production process more efficient. Only those parts that are actually needed are produced. The production department does not produce inventory just to keep busy. It produces inventory or parts only when they are needed in the user area or on an actual manufacturing line.
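The dual-card loop can be mimicked with a toy simulation. Everything in the Python sketch below (the container size, the queue names, the function names) is hypothetical and is meant only to trace steps 1 through 4; it is not a description of any real Kanban software.

    from collections import deque

    CONTAINER_SIZE = 10            # Kanban rule: every container holds exactly this many parts

    storage_area = deque()         # full containers, each with a P-Kanban attached
    producer_queue = deque()       # empty containers plus detached P-Kanbans awaiting refill

    def producer_refills():
        """Steps 3-4: refill each empty container, attach its P-Kanban, return it to storage."""
        while producer_queue:
            producer_queue.popleft()
            storage_area.append({"parts": CONTAINER_SIZE, "p_kanban_attached": True})

    def user_swaps_container():
        """Steps 1-2: the user returns an empty container plus C-Kanban and takes a full one."""
        full = storage_area.popleft()
        full["p_kanban_attached"] = False                 # the detached P-Kanban signals more production
        producer_queue.append("empty container + P-Kanban")
        return full["parts"]                              # parts now available at the user area

    # start with the minimum of two circulating containers
    storage_area.extend({"parts": CONTAINER_SIZE, "p_kanban_attached": True} for _ in range(2))

    parts_used = user_swaps_container()
    producer_refills()
    print(parts_used, "parts delivered;", len(storage_area), "full containers back in storage")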

IN ACTION

MRP and JIT at Welpac, Westair, and Rio Bravo Electronics

Peter Antonioni, the purchasing manager for Welpac, a large hardware company, packages thousands of general hardware items for builders and other customers. About 10,000 raw materials are placed in about 5,000 end products. The company uses MRP to help it with its purchasing effort. The results have been better inventory control and lower costs. As a result of the use of MRP, the British firm Westair has been able to increase inventory turnover by 60%. In addition, MRP has helped the company reduce inventory lead times from four to six weeks to days. Rio Bravo Electronics, located in Juarez, Mexico, uses JIT to make sure that products are shipped on time. Every two

hours or so, the supply of parts on the floor is replenished from the main materials storage area using a JIT delivery system. The company is involved with the “maquiladora concept,” which allows Rio Bravo to receive materials in Mexico duty free and then ship the finished components back to the United States. Rio Bravo assembles wiring harnesses for the U.S. company Delphi Packard Electric Systems.

Source: Holder Roy, Works Management 48, 3 (March 1995): 18–21; Works Management 48, 3 (March 1995): 7; and Jeffrey L. Funk, International Journal of Operations & Production Management 15, 5 (1995): 60–71.


Summary The demand for inventory is not always independent. When demand is dependent, a technique such as MRP is needed. MRP can be used to determine the gross and net material requirements for products. This module also investigated the use of JIT inventory. JIT can lower inventory levels, reduce costs, and make a manufacturing process more efficient. Kanban, a Japanese word meaning card, is one way to implement the JIT approach.

Glossary Material Requirements Planning. An inventory model that can handle dependent demand. Just-in-Time (JIT) Inventory. An approach whereby inventory arrives just in time to be used in the manufacturing process. Kanban. A manual JIT system developed by the Japanese. Kanban means “card” in Japanese.

Solved Problems

Solved Problem M4-1
Because of a technological breakthrough in the product illustrated in Figure M4.1, it now takes only 2 units of item C to make 1 unit of item A. How does this technological breakthrough change the material structure tree and the quantities and types of materials required to make 50 units of item A? Assume that there is no on-hand inventory.

Solution
The technological breakthrough changes the structure tree we saw in Figure M4.1 and changes what materials and quantities are required to make 50 units of item A. These changes are shown below:

    Material Structure Tree for Item A

    Level 0                 A
                          /   \
    Level 1            B(2)   C(2)
                       /  \    /  \
    Level 2         D(2) E(3) E(1) F(2)

    Part B:  2 × number of A's = 2 × 50 = 100
    Part C:  2 × number of A's = 2 × 50 = 100
    Part D:  2 × number of B's = 2 × 100 = 200
    Part E:  3 × number of B's + 1 × number of C's = 3 × 100 + 1 × 100 = 400
    Part F:  2 × number of C's = 2 × 100 = 200

Solved Problem M4-2
As you saw in Table M4.1, the on-hand inventory for item C is currently 20 units and the on-hand inventory for item D is currently 10 units. How would the gross and net material requirements plans change if the on-hand inventory were 10 units for item C and 5 units for item D?

Solution
On-hand inventory does not have any impact on the gross material requirements plan. While on-hand inventory does not change the gross material requirements plan, it does have an impact on the net material requirements plan. The results are as follows:

    ITEM A (lead time 1 week):  gross requirement 50 in week 6; on hand 10; net 40; planned-order receipt 40 in week 6; planned-order release 40 in week 5.
    ITEM B (lead time 2 weeks): gross requirement 80 in week 5 (from A); on hand 15; net 65; order receipt in week 5; order release 65 in week 3.
    ITEM C (lead time 1 week):  gross requirement 120 in week 5 (from A); on hand 10; net 110; order receipt in week 5; order release 110 in week 4.
    ITEM D (lead time 1 week):  gross requirement 130 in week 3 (from B); on hand 5; net 125; order receipt in week 3; order release 125 in week 2.
    ITEM E (lead time 2 weeks): gross requirements 195 in week 3 (from B) and 110 in week 4 (from C); on hand 10; net 185 and 110; order receipts in weeks 3 and 4; order releases 185 in week 1 and 110 in week 2.
    ITEM F (lead time 3 weeks): gross requirement 220 in week 4 (from C); on hand 5; net 215; order receipt in week 4; order release 215 in week 1.


SELF-TEST • • •

Before taking the self-test, refer back to the learning objectives at the beginning of the module and the glossary at the end of the module. Use the key at the back of the book to correct your answers. Restudy pages that correspond to any questions that you answered incorrectly or material you feel uncertain about.

1. Which of the following is not an advantage of MRP? a. increased customer service b. reduced inventory costs c. faster response to market changes d. reduced inventory levels e. all of the above are advantages of MRP 2. In MRP, what is used to show which items and their quantities are needed to make a finished product? a. order file b. material structure tree c. Kanban file d. ABC file e. order release 3. Which of the following is not included in the gross material requirements plan? a. required date b. order release c. on-hand inventory d. lead time e. all of the above are included in the gross materials plan

4. Which of the following is not included in the net materials plan? a. required date b. order release c. on-hand delivery d. lead time e. all of the above are included in the gross materials plan 5. Kanban means a. productive rice field. b. card. c. just-in-time. d. container. e. productivity. 6. With ________________ , inventory arrives when it is needed in the manufacturing process. 7. On-hand inventory is needed to construct the ________________ plan. 8. MRP is used when the demand for one item is ________________ on the demand for other items.


Discussion Questions and Problems

Discussion Questions
M4-1 What is the overall purpose of MRP?
M4-2 How is a structure tree used in MRP?
M4-3 What is the difference between the gross and net material requirements plans?
M4-4 What is the objective of JIT?
M4-5 What does Kanban mean, and how does it work?

Problems
•M4-6 This module presented a material structure tree for item A in Figure M4.1. Assume that it now takes 1 unit of item B to make every unit of item A. What impact does this have on the material structure tree and the number of units of D and E that are needed?
••M4-7 Given the information in Problem M4-6, develop a gross material requirements plan for 50 units of item A.
••M4-8 Using the data from Figures M4.1 through M4.3, develop a net material requirements plan for 50 units of item A, assuming that it only takes 1 unit of item B for each unit of item A.
••M4-9 The demand for product S is 100 units. Each unit of S requires 1 unit of T and ½ unit of U. Each unit of T requires 1 unit of V, 2 units of W, and 1 unit of X. Finally, each unit of U requires ½ unit of Y and 3 units of Z. All items are manufactured by the same firm. It takes two weeks to make S, one week to make T, two weeks to make U, two weeks to make V, three weeks to make W, one week to make X, two weeks to make Y, and one week to make Z.
(a) Construct a material structure tree and a gross material requirements plan for the dependent inventory items.
(b) Identify all levels, parents, and components.
(c) Construct a net material requirements plan using the following on-hand inventory data:

    ITEM    ON-HAND INVENTORY
    S       20
    T       20
    U       10
    V       30
    W       30
    X       25
    Y       15
    Z       10

INTERNET CASE STUDY See our Internet home page at http://www.prenhall.com/render for this MRP case study: Service, Inc.


Bibliography Allnoch, Allen. “Manufacturing Software Plays Key Role,” IE Solutions (November 1997): 1085. Brucker, H. D., G. A. Flowers, and R. D. Peck. “MRP Shop-Floor Control in a Job Shop: Definitely Works,” Production and Inventory Management Journal 33, 2 (Second Quarter 1992): 43. Ding, F., and M. Yuen. “A Modified MRP for a Production System with the Coexistence of MRP and Kanbans,” Journal of Operations Management 10, 2 (April 1991): 267–277. “Fully Automated System Achieves True JIT,” Modern Materials Handling (April 1998): 122. Funk, Jeffrey L. “Just-in-Time Manufacturing and Logistical Complexity: A Contingency Model,”

International Journal of Operations & Production Management 15, 5 (1995): 60–71. Holder, Roy. “MRP Helps Welpac Win through Effective Purchasing,” Works Management 48, 3 (March 1995): 18–21. Imai, Masakki. “Will America’s Corporate Theme Song be ‘Just-In-Time’?” Journal for Quality & Participation (March 1998): 26. Jacobs, F. R., and D. C. Whybark. “A Comparison of Reorder Point and Material Requirements Planning Inventory Control Logic,” Decision Sciences 23, 2 (March–April 1992): 332. Penlesky, R. J., et al. “Open Order Due Date Maintenance in MRP Systems,” Management Science 35 (May 1989): 571–584.


M O D U L E

5

Mathematical Tools: Determinants and Matrices

LEARNING OBJECTIVES

After completing this module, students will be able to:
1. Understand how matrices and determinants are used as mathematical tools in QA.
2. Compute the value of a determinant.
3. Solve simultaneous equations with determinants.
4. Add, subtract, multiply, and divide matrices.
5. Transpose and find the inverse of matrices.

MODULE OUTLINE
M5.1 Introduction
M5.2 Determinants
M5.3 Matrices
Summary • Glossary • Self-Test • Problems • Bibliography


M5.1 INTRODUCTION ● Two new mathematical concepts, determinants and matrices, are introduced in this module. These tools are especially useful in Chapter 16 and the Supplement to Chapter 1, which deal with Markov analysis and game theory, but they are also handy computational aids for many other quantitative analysis problems, including linear programming, the topic of Chapters 7, 8, and 9.

M5.2 DETERMINANTS ●

A determinant is simply a square array of numbers arranged in rows and columns. Every determinant has a unique numerical value for which we can solve. As a mathematical tool, determinants are of value in helping to solve a series of simultaneous equations. A 2-row by 2-column (2 × 2) determinant has the following form, where a, b, c, and d are numbers:

    | a  b |
    | c  d |

Similarly, a 3 × 3 determinant has nine entries:

    | a  b  c |
    | d  e  f |
    | g  h  i |

One common procedure for finding the numerical value of a 2 × 2 or 3 × 3 determinant is to draw its primary and secondary diagonals. In the case of a 2 × 2 determinant, the value is found by multiplying the numbers on the primary diagonal (a and d) and subtracting from that product the product of the numbers on the secondary diagonal (c and b):

    value = (a)(d) − (c)(b)

For a 3 × 3 determinant, we redraw the first two columns to help visualize all diagonals and follow a similar procedure:

    value = 1st primary diagonal product (aei)
          + 2nd primary diagonal product (bfg)
          + 3rd primary diagonal product (cdh)
          − 1st secondary diagonal product (gec)
          − 2nd secondary diagonal product (hfa)
          − 3rd secondary diagonal product (idb)
          = aei + bfg + cdh − gec − hfa − idb
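The diagonal rule translates directly into code. The Python sketch below is an added illustration (the example matrix is arbitrary, chosen only for this sketch): it implements the six-product formula for a 3 × 3 determinant and checks it against NumPy's general routine.

    import numpy as np

    def det3_diagonal_rule(m):
        """Value of a 3x3 determinant: aei + bfg + cdh - gec - hfa - idb."""
        (a, b, c), (d, e, f), (g, h, i) = m
        return a*e*i + b*f*g + c*d*h - g*e*c - h*f*a - i*d*b

    m = [[2, 1, 3],
         [0, 4, 1],
         [5, 2, 2]]                          # any 3x3 array of numbers will do

    print(det3_diagonal_rule(m))             # -43
    print(round(np.linalg.det(np.array(m))))  # -43 as well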


Let’s use this approach to find the numerical values of the following 2  2 and 3  3 determinants:

 

(a)

 



3 (b) 2 4

2 5 1 8

2 5 1 8 a

a

a

a

2 1 1



2 1 1

  3 2 4

1 5 2 a

1 5 2

a



3 (b) 2 4

1 5 2

Value  (2)(8)  (1)(5)  11

a

(a)

Value  (3) (5) (1)  (1) (1) (4)  (2) (2) (2)  (4) (5) (2)  (2) (1) (3)  (1) (2) (1)   15  4  8  40  6  2  51 A set of simultaneous equations may be solved through the use of determinants by setting up a ratio of two special determinants for each unknown variable. This fairly easy procedure is best illustrated with an example. Given the three simultaneous equations, 2X  3Y  1Z  10 4X  1Y  2Z  8 5X  2Y  3Z  6 we may structure determinants to help solve for unknown quantities X, Y, and Z.

X = (numerator determinant) / (denominator determinant)

    numerator determinant for X:     | 10   3   1 |
                                     |  8  −1  −2 |
                                     |  6   2  −3 |
    (the column of X coefficients is replaced by the column of numbers to the right of the equal signs)

    denominator determinant:         |  2   3   1 |
                                     |  4  −1  −2 |
                                     |  5   2  −3 |
    (the coefficients of all unknown variables, that is, all columns to the left of the equal signs)

Y = (numerator determinant) / (denominator determinant)

    numerator determinant for Y:     |  2  10   1 |
                                     |  4   8  −2 |
                                     |  5   6  −3 |
    (the column of Y coefficients is replaced by the right-hand-side numbers)

    The denominator determinant stays the same regardless of which variable we are solving for.

Z = (numerator determinant) / (denominator determinant)

    numerator determinant for Z:     |  2   3  10 |
                                     |  4  −1   8 |
                                     |  5   2   6 |
    (the column of Z coefficients is replaced by the right-hand-side numbers; the denominator determinant is again the same as when solving for X and Y)

Determining the values of X, Y, and Z now involves finding the numerical values of the four separate determinants using the method shown earlier in this module:

    X = (numerical value of numerator determinant) / (numerical value of denominator determinant)
      = 128/33 = 3.88

    Y = −20/33 = −0.61

    Z = 134/33 = 4.06

To verify that X = 3.88, Y = −0.61, and Z = 4.06, we may choose any one of the original three simultaneous equations and insert these numbers. For example,

    2X + 3Y + 1Z = 10
    2(3.88) + 3(−0.61) + 1(4.06) = 7.76 − 1.83 + 4.06 ≈ 10
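The same solution can be reproduced with NumPy, either by building the four determinants of the ratios above or by calling the library's solver directly. The sketch below is an added illustration using the coefficients of this example; it is not part of the original module.

    import numpy as np

    A = np.array([[2.0, 3.0, 1.0],      # coefficient matrix of the three equations
                  [4.0, -1.0, -2.0],
                  [5.0, 2.0, -3.0]])
    b = np.array([10.0, 8.0, 6.0])      # right-hand-side values

    denominator = np.linalg.det(A)
    solution = []
    for col in range(3):                # determinants method: replace one column at a time with b
        numerator_matrix = A.copy()
        numerator_matrix[:, col] = b
        solution.append(np.linalg.det(numerator_matrix) / denominator)

    print(np.round(solution, 2))                 # [ 3.88 -0.61  4.06]
    print(np.round(np.linalg.solve(A, b), 2))    # the built-in solver gives the same answer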

M5.3 MATRICES ●

A matrix, like a determinant, can also be defined as an array of numbers arranged in rows and columns. Matrices, which are usually enclosed in parentheses or brackets, have no numerical value as determinants do, but they are an effective means of presenting or summarizing business data. The following 2-row by 3-column (2 × 3) matrix, for example, might be used by television station executives to describe the channel-switching behavior of their 5 o'clock TV news audience.

    AUDIENCE SWITCHING PROBABILITIES, NEXT MONTH'S ACTIVITY
    CURRENT STATION    CHANNEL 6    CHANNEL 8    STOP VIEWING
    Channel 6          0.80         0.15         0.05
    Channel 8          0.20         0.70         0.10
    (a 2 × 3 matrix)

The number in the first row and first column indicates that there is a 0.80 probability that someone currently watching the Channel 6 news will continue to do so next month. Similarly, 15% of Channel 6’s viewers are expected to switch to Channel 8 next month


(row 1, column 2), 5% will not be watching the 5 o’clock news at all (row 1, column 3), and so on for the second row. The remainder of this module deals with the numerous mathematical operations that can be performed on matrices. These include matrix addition, subtraction and multiplication, transposing a matrix, finding its cofactors and adjoint, and matrix inversion.

Matrix Addition and Subtraction

Matrix addition and subtraction are the easiest operations. Matrices of the same dimensions, that is, with the same number of rows and columns, can be added or subtracted by adding or subtracting the numbers in the same row and column of each matrix. Here are two small matrices:

    matrix A = | 5  7 |        matrix B = | 3  6 |
               | 2  1 |                   | 3  8 |

Adding and subtracting numbers.

To find the sum of these 2 × 2 matrices, we add corresponding elements to create a new matrix:

    matrix C = matrix A + matrix B = | 5  7 | + | 3  6 | = | 8  13 |
                                     | 2  1 |   | 3  8 |   | 5   9 |

To subtract matrix B from matrix A, we simply subtract the corresponding elements in each position:

    matrix C = matrix A − matrix B = | 5  7 | − | 3  6 | = |  2   1 |
                                     | 2  1 |   | 3  8 |   | −1  −7 |
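In code, element-wise matrix addition and subtraction are one-liners; the NumPy sketch below is an added illustration that reproduces the two results above.

    import numpy as np

    A = np.array([[5, 7], [2, 1]])
    B = np.array([[3, 6], [3, 8]])

    print(A + B)   # [[ 8 13]
                   #  [ 5  9]]
    print(A - B)   # [[ 2  1]
                   #  [-1 -7]]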



Matrix Multiplication

Matrix multiplication is an operation that may take place only if the number of columns in the first matrix equals the number of rows in the second matrix. Thus, matrices of the dimensions in the following table may be multiplied:

    MATRIX A SIZE    MATRIX B SIZE    SIZE OF A × B (RESULTING MATRIX)
    3 × 3            3 × 3            3 × 3
    3 × 1            1 × 3            3 × 3
    3 × 1            1 × 1            3 × 1
    2 × 4            4 × 3            2 × 3
    6 × 9            9 × 2            6 × 2
    8 × 3            3 × 6            8 × 6


We also note, in the far-right column of the table, that the outer two numbers in the matrix sizes determine the dimensions of the new matrix. That is, if an 8-row by 3-column matrix is multiplied by a 3-row by 6-column matrix, the resulting product will be an 8-row by 6-column matrix. Matrices of the dimensions in the following table may not be multiplied:

Multiplying numbers.

    MATRIX A SIZE    MATRIX B SIZE
    3 × 4            3 × 3
    1 × 2            1 × 2
    6 × 9            8 × 9
    2 × 2            3 × 3

To perform the multiplication process, we take each row of the first matrix and multiply its elements by the numbers in each column of the second matrix. Hence the number in the first row and first column of the new matrix is derived from the product of the first row of the first matrix and the first column of the second matrix. Similarly, the number in the first row and second column of the new matrix is the product of the first row of the first matrix and the second column of the second matrix. This concept is not nearly as confusing as it may sound. Let us begin by computing the value of matrix C, which is the product of matrix A times matrix B:

    matrix A = | 5 |        matrix B = (4  6)
               | 2 |
               | 3 |

This is a legitimate task since matrix A is 3 × 1 and matrix B is 1 × 2. The product, matrix C, will have 3 rows and 2 columns (3 × 2). Symbolically, the operation is matrix A × matrix B = matrix C:

    | a |                 | ad  ae |
    | b | × (d  e)   =    | bd  be |                    (M5-1)
    | c |                 | cd  ce |

Using the actual numbers, we have

    | 5 |                 | 20  30 |
    | 2 | × (4  6)   =    |  8  12 |  = matrix C
    | 3 |                 | 12  18 |

As a second example, let matrix R be (6  2  5) and matrix S be

    | 3 |
    | 1 |
    | 2 |


Then the product, matrix T = matrix R × matrix S, will be of dimension 1 × 1 because we are multiplying a 1 × 3 matrix by a 3 × 1 matrix:

    matrix R × matrix S = matrix T
     (1 × 3)    (3 × 1)   (1 × 1)

                 | d |
    (a  b  c) ×  | e |  =  (ad + be + cf)
                 | f |

                 | 3 |
    (6  2  5) ×  | 1 |  =  ((6)(3) + (2)(1) + (5)(2))  =  (30)
                 | 2 |

To multiply larger matrices, we combine the approaches of the preceding examples:

    matrix U = | 6  2 |        matrix V = | 3  4 |
               | 7  1 |                   | 5  8 |

    matrix U × matrix V = matrix Y
      (2 × 2)   (2 × 2)    (2 × 2)

    | a  b | × | e  f |  =  | ae + bg   af + bh |                   (M5-2)
    | c  d |   | g  h |     | ce + dg   cf + dh |

    | 6  2 | × | 3  4 |  =  | 18 + 10   24 + 16 |  =  | 28  40 |
    | 7  1 |   | 5  8 |     | 21 +  5   28 +  8 |     | 26  36 |

To introduce a special type of matrix, called the identity matrix, let's try a final multiplication example:

    matrix H = | 4  7 |        matrix I = | 1  0 |
               | 2  3 |                   | 0  1 |

    matrix H × matrix I = matrix J

    | 4  7 | × | 1  0 |  =  | 4 + 0   0 + 7 |  =  | 4  7 |
    | 2  3 |   | 0  1 |     | 2 + 0   0 + 3 |     | 2  3 |

The identity matrix.

Matrix I is called an identity matrix. An identity matrix has 1s on its diagonal and 0s in all other positions. When multiplied by any matrix of the same square dimensions, it yields the original matrix. So in this case, matrix J = matrix H. Matrix multiplication can also be useful in performing business computations. Blank Plumbing and Heating is about to bid on three contract jobs: to install plumbing fixtures in a new university dormitory, an office building, and an apartment complex.


The number of toilets, sinks, and bathtubs needed at each project is summarized in matrix notation as follows. The cost per plumbing fixture is also given. Matrix multiplication may be used to provide an estimate of the total cost of fixtures at each job.

    PROJECT DEMAND
    PROJECT        Toilets    Sinks    Bathtubs
    Dormitory      5          10       2
    Office         20         20       0
    Apartments     15         30       15

    COST/UNIT
    Toilet         $40
    Sink           $25
    Bathtub        $50

    Job demand matrix × fixture cost matrix = job cost matrix
        (3 × 3)            (3 × 1)             (3 × 1)

    |  5  10   2 |   | $40 |     | $200 + 250 + 100 |     |   $550 |
    | 20  20   0 | × | $25 |  =  | $800 + 500 +   0 |  =  | $1,300 |
    | 15  30  15 |   | $50 |     | $600 + 750 + 750 |     | $2,100 |

Hence Blank Plumbing can expect to spend $550 on fixtures at the dormitory project, $1,300 at the office building, and $2,100 at the apartment complex.
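The Blank Plumbing estimate is a single matrix-vector product; the NumPy sketch below is an added illustration that reproduces the three job costs.

    import numpy as np

    demand = np.array([[5, 10, 2],      # dormitory: toilets, sinks, bathtubs
                       [20, 20, 0],     # office
                       [15, 30, 15]])   # apartments
    cost_per_unit = np.array([40, 25, 50])   # toilet, sink, bathtub

    print(demand @ cost_per_unit)       # [ 550 1300 2100]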

Matrix Transpose

The transpose of a matrix is a means of presenting data in a different form. To create the transpose of a given matrix, we simply interchange the rows with the columns. Hence, the first row of a matrix becomes its first column, the second row becomes the second column, and so on. Two matrices are transposed here:

    matrix A = | 5  2  6 |        transpose of matrix A = | 5  3  1 |
               | 3  0  9 |                                | 2  0  4 |
               | 1  4  8 |                                | 6  9  8 |

    matrix B = | 2  7  0  3 |     transpose of matrix B = | 2  8 |
               | 8  5  6  4 |                             | 7  5 |
                                                          | 0  6 |
                                                          | 3  4 |


Matrix of Cofactors and Adjoint

Two more useful concepts in the mathematics of matrices are the matrix of cofactors and the adjoint of a matrix. A cofactor is defined as the set of numbers that remains after a given row and column have been taken out of a matrix. An adjoint is simply the transpose of the matrix of cofactors. The real value of the two concepts lies in their usefulness in forming the inverse of a matrix, something that we investigate in the next section. To compute the matrix of cofactors for a particular matrix, we follow six steps:

Six Steps in Computing a Matrix of Cofactors
1. Select an element in the original matrix.
2. Draw a line through the row and column of the element selected. The numbers uncovered represent the cofactor for that element.
3. Calculate the value of the determinant of the cofactor.
4. Add together the location numbers of the row and column crossed out in step 2. If the sum is even, the sign of the determinant's value (from step 3) does not change. If the sum is an odd number, change the sign of the determinant's value.
5. The number just computed becomes an entry in the matrix of cofactors; it is located in the same position as the element selected in step 1.
6. Return to step 1 and continue until all elements in the original matrix have been replaced by their cofactor values.

Let’s compute the matrix of cofactors, and then the adjoint, for the following matrix, using Table M5.1 on the next page to help in the calculations:

original matrix 



7 0 1

matrix of cofactors 



3 51 21

4 4 1

2 25 14

adjoint of the matrix 



3 4 2

51 4 25

21 1 14

3 2 4



5 3 8

 

(from Table M5.1)

Finding the Inverse of a Matrix

The inverse of a matrix is a unique matrix of the same dimensions that, when multiplied by the original matrix, produces a unit or identity matrix. For example, if A is any 2 × 2 matrix and its inverse is denoted A⁻¹, then

    A × A⁻¹ = | 1  0 |  = identity matrix                 (M5-3)
              | 0  1 |

TABLE M5.1  Matrix of Cofactor Calculations

    ELEMENT REMOVED    COFACTOR        DETERMINANT OF COFACTOR       VALUE OF COFACTOR
    Row 1, column 1    |  0  −3 |      (0)(8) − (1)(−3)  =   3         3  (sign not changed)
                       |  1   8 |
    Row 1, column 2    | −2  −3 |      (−2)(8) − (4)(−3) =  −4         4  (sign changed)
                       |  4   8 |
    Row 1, column 3    | −2   0 |      (−2)(1) − (4)(0)  =  −2        −2  (sign not changed)
                       |  4   1 |
    Row 2, column 1    |  7   5 |      (7)(8) − (1)(5)   =  51       −51  (sign changed)
                       |  1   8 |
    Row 2, column 2    |  3   5 |      (3)(8) − (4)(5)   =   4         4  (sign not changed)
                       |  4   8 |
    Row 2, column 3    |  3   7 |      (3)(1) − (4)(7)   = −25        25  (sign changed)
                       |  4   1 |
    Row 3, column 1    |  7   5 |      (7)(−3) − (0)(5)  = −21       −21  (sign not changed)
                       |  0  −3 |
    Row 3, column 2    |  3   5 |      (3)(−3) − (−2)(5) =   1        −1  (sign changed)
                       | −2  −3 |
    Row 3, column 3    |  3   7 |      (3)(0) − (−2)(7)  =  14        14  (sign not changed)
                       | −2   0 |

Determining the value of the determinant.

The adjoint of a matrix is extremely helpful in forming the inverse of the original matrix. We simply compute the value of the determinant of the original matrix and divide each term of the adjoint by this value. To find the inverse of the matrix just presented, we need to know the adjoint (already computed) and the value of the determinant of the original matrix:

    original matrix = |  3   7   5 |
                      | −2   0  −3 |
                      |  4   1   8 |

Value of determinant:

    value = (3)(0)(8) + (7)(−3)(4) + (5)(−2)(1) − (4)(0)(5) − (1)(−3)(3) − (8)(−2)(7)
          = 0 − 84 − 10 − 0 + 9 + 112 = 27


The inverse is found by dividing each element in the adjoint by the value of the determinant, 27:

    inverse = |  3/27  −51/27  −21/27 |
              |  4/27    4/27   −1/27 |
              | −2/27   25/27   14/27 |

We may verify that this is indeed the correct inverse of the original matrix by multiplying the original matrix by the inverse:

    original matrix       inverse                            identity matrix

    |  3   7   5 |   |  3/27  −51/27  −21/27 |     | 1  0  0 |
    | −2   0  −3 | × |  4/27    4/27   −1/27 |  =  | 0  1  0 |
    |  4   1   8 |   | −2/27   25/27   14/27 |     | 0  0  1 |

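The same arithmetic can be checked by machine. The sketch below is a minimal plain-Python version of the adjoint method for this example; it uses the standard-library fractions module so the 27ths stay exact, and the variable names are illustrative only, not part of the text.

# A minimal sketch of the adjoint method for the 3 x 3 example above.
from fractions import Fraction

A = [[3, 2, 4],
     [7, 0, 1],
     [5, 3, 8]]

# Determinant by expansion along the first row: 3(-3) - 2(51) + 4(21) = -27
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

# Adjoint as computed in the text (transpose of the matrix of cofactors)
adjoint = [[-3, -4, 2],
           [-51, 4, 25],
           [21, 1, -14]]

# Each element of the adjoint divided by the determinant gives the inverse
inverse = [[Fraction(entry, det) for entry in row] for row in adjoint]

# Verify that (original matrix) x (inverse) is the identity matrix
product = [[sum(A[i][k] * inverse[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # True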
Summary

This module contained a brief presentation of determinants and matrices, two mathematical tools often used in quantitative analysis. Determinants are useful in solving a series of simultaneous equations. Matrices are the basis for the simplex method of linear programming. The module's discussion included matrix addition, subtraction, multiplication, transposition, cofactors, adjoints, and inverses.

Glossary

Determinant. A square array of numbers arranged in rows and columns. Every determinant has a unique numerical value.
Simultaneous Equations. A series of equations that must be solved at the same time.
Matrix. An array of numbers that can be used to present or summarize business data.
Identity Matrix. A square matrix with 1s on its diagonal and 0s in all other positions.
Transpose. The interchange of rows and columns in a matrix.
Matrix of Cofactors. The determinants of the numbers remaining in a matrix after a given row and column have been removed.
Adjoint. The transpose of a matrix of cofactors.
Inverse. A unique matrix that may be multiplied by the original matrix to create an identity matrix.


SELF-TEST • • •

Before taking the self-test, refer back to the learning objectives at the beginning of the module and the glossary at the end of the module. Use the key at the back of the book to correct your answers. Restudy pages that correspond to any questions that you answered incorrectly or material you feel uncertain about.

1. A determinant is ________________ .
2. The value of the determinant
      | 1  3 |
      | 2  4 |
   is
   a. 2.
   b. 10.
   c. -2.
   d. 5.
   e. 14.
3. To find the value of a small determinant, you ________________ .
4. Matrices
   a. are usually enclosed in parentheses or brackets.
   b. can be defined as an array of numbers in rows and columns.
   c. have no numerical value.
   d. are an effective way of presenting or summarizing data.
   e. all of the above.
5. Matrix multiplication may take place only if the number of columns in the first matrix equals the __________ .
6. An identity matrix
   a. has 1s on its diagonal.
   b. has 0s in all positions not on a diagonal.
   c. can be multiplied by any matrix of the same dimensions.
   d. is square in size.
   e. all of the above.
7. To create the transpose of a matrix, you ________________ .
8. When the inverse of a matrix is multiplied by the original matrix, it produces
   a. the matrix of cofactors.
   b. the adjoint of the matrix.
   c. the transpose.
   d. the identity matrix.
   e. none of the above.

Problems

•M5-1  Find the numerical values of the following determinants.

  (a)  | 6  5 |
       | 3  2 |

  (b)  | 3  1  4 |
       | 7  1  3 |
       | 6  2  2 |

••M5-2 Use determinants to solve the following set of simultaneous equations. 5X  2Y  3Z  4 2X  3Y  1Z  2 3X  1Y  2Z  3

•M5-3  Perform the following operations.
  (a) Add matrix A to matrix B.
  (b) Subtract matrix A from matrix B.
  (c) Add matrix C to matrix D.
  (d) Add matrix C to matrix A.

matrix A 

matrix B 







matrix D 

5 2



6 1





1 7

matrix C 

4 8

2 3

7 0

3 7 9



6 8 2

9 1 4

1 0 1

6 6 5

4

5)

5 4 3



••M5-4  Perform the following matrix multiplications.
  (a) Matrix C = matrix A × matrix B
  (b) Matrix G = matrix E × matrix F
  (c) Matrix T = matrix R × matrix S
  (d) Matrix Z = matrix W × matrix Y

matrix A 

21

matrix E  (5

matrix B  (3

2

6

1) matrix F 

 4 3 2 0

matrix R 

21 34

matrix S 

10 01

matrix W 

 

matrix Y 

12

3 2 4

5 1 4

4 3

5 6



1 5

••M5-5  RLB Electrical Contracting, Inc., bids on the same three jobs as Blank Plumbing (Section M5.3). RLB must supply wiring, conduits, electrical wall fixtures, and lighting fixtures. The following are the needed supplies and their costs per unit:

                                   DEMAND
  PROJECT        WIRING (ROLLS)   CONDUITS   WALL FIXTURES   LIGHTING FIXTURES
  Dormitory            50            100           10               20
  Office               70             80           20               30
  Apartments           20             50           30               10

  ITEM                 COST/UNIT ($)
  Wiring                   1.00
  Conduits                 2.00
  Wall fixtures            3.00
  Lighting fixtures        5.00

Use matrix multiplication to compute the cost of materials at each job site.

• M5-6

••M5-7 •

Transpose matrices R and S.

matrix R 

 

matrix S 

  3 2 5

8 0 4 1

2 5 3 2

2 7 1 7

1 2 4

Find the matrix of cofactors and adjoint of the matrix



1 2 3

••M5-8

6 1 6 3

4 0 6



7 8 9

Find the inverse of the original matrix of Problem M5-7 and verify its correctness.


M O D U L E

6

The Binomial Distribution

LEARNING OBJECTIVES After completing this module, students will be able to:
1. Describe a Bernoulli process.
2. Use the binomial table to solve problems.
3. Apply the binomial probability formula.

MODULE OUTLINE
M6.1 Introduction
M6.2 Solving Problems with the Binomial Formula
M6.3 Solving Problems with Binomial Tables
Discussion Questions and Problems • Case Study: WTVX


M6.1 INTRODUCTION ● Many business experiments can be characterized by the Bernoulli process, which follows the binomial probability distribution. In order to be a Bernoulli process, an experiment must have the following characteristics:

1. Each trial in a Bernoulli process has only two possible outcomes: yes or no, success or failure, heads or tails, pass or fail, and so on.
2. Regardless of how many times the experiment is performed, the probability of each outcome stays the same from trial to trial.
3. The trials are statistically independent.
4. The number of trials is known and is a fixed whole number: 1, 2, 3, 4, 5, and so on.

To analyze a Bernoulli process, we need to know the values of (1) the probability of success on a single trial, p, and the probability of a failure on a single trial, q (which equals 1 - p); (2) the number of successes desired, r; and (3) the number of trials performed, n. A common example of a Bernoulli process is flipping a coin. If we wish to compute the probability of getting exactly 4 heads on 5 tosses of a fair coin, the Bernoulli process parameters are

  p = probability of heads = .5
  q = probability of tails (nonheads) = 1 - p = .5
  r = number of successes desired = 4
  n = number of trials performed = 5

There are two ways of solving these Bernoulli problems to find the desired probabilities. The first is to apply the formula, called the binomial probability formula, given in Equation M6-1:

  Probability of r successes in n trials = [n! / (r!(n - r)!)] p^r q^(n-r)          (M6-1)

The symbol ! means factorial. To compute 5!, for example, we just multiply 5 × 4 × 3 × 2 × 1 = 120. Likewise, 4! = 4 × 3 × 2 × 1 = 24, 1! = 1, and 0! = 1. Although Equation M6-1 works well in small problems, it can become cumbersome when large values of n and r are inserted. The second method is to make use of binomial distribution tables. Both approaches are illustrated in the following sections.

M6.2 SOLVING PROBLEMS WITH THE BINOMIAL FORMULA ● Using the binomial probability formula, we can solve for the probability of getting exactly four heads in five tosses of a coin.

  p = .5     q = .5     r = 4     n = 5

  Probability of r successes in n trials = [n! / (r!(n - r)!)] p^r q^(n-r)
                                         = [5! / (4!(5 - 4)!)] (.5)^4 (.5)^1
                                         = [(5 × 4 × 3 × 2 × 1) / ((4 × 3 × 2 × 1)(1))] (.5)^4 (.5)^1

or

  Probability = [120 / ((24)(1))] (.0625)(.5) = .15625

Thus, the probability that 4 tosses out of 5 will land heads up is .15625, or about 16 percent. Using Equation M6-1, it is also possible to determine the entire probability distribution for a binomial experiment. The probability distribution of flipping a fair coin 5 times is shown in Table M6.1 and then graphed in Figure M6.1.

TABLE M6.1  Binomial Probability Distribution

  r (NUMBER OF HEADS)      PROBABILITY = [5! / (r!(5 - r)!)] (.5)^r (.5)^(5-r)
          0                .03125 = [5! / (0!(5 - 0)!)] (.5)^0 (.5)^(5-0)
          1                .15625 = [5! / (1!(5 - 1)!)] (.5)^1 (.5)^(5-1)
          2                .3125  = [5! / (2!(5 - 2)!)] (.5)^2 (.5)^(5-2)
          3                .3125  = [5! / (3!(5 - 3)!)] (.5)^3 (.5)^(5-3)
          4                .15625 = [5! / (4!(5 - 4)!)] (.5)^4 (.5)^(5-4)
          5                .03125 = [5! / (5!(5 - 5)!)] (.5)^5 (.5)^(5-5)

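Table M6.1 can also be reproduced by machine. The short Python sketch below implements Equation M6-1 directly; binomial_probability is an illustrative name of our own, not a function defined in this module.

# A small sketch of Equation M6-1 in plain Python.
from math import factorial

def binomial_probability(r, n, p):
    """Probability of exactly r successes in n Bernoulli trials."""
    q = 1 - p
    return factorial(n) / (factorial(r) * factorial(n - r)) * p**r * q**(n - r)

# The coin-tossing example: 4 heads in 5 tosses of a fair coin
print(binomial_probability(4, 5, 0.5))          # 0.15625

# Reproduce Table M6.1: the full distribution for n = 5, p = .5
for r in range(6):
    print(r, binomial_probability(r, 5, 0.5))   # .03125, .15625, .3125, .3125, .15625, .03125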
FIGURE M6.1  Binomial Probability Distribution When n = 5, p = 0.50. (Graph of the distribution: horizontal axis, Values of the Random Variable, r, from 0 to 5; vertical axis, Probability of r, P(r), from 0 to .4.)


M6.3 SOLVING PROBLEMS WITH BINOMIAL TABLES ● MSA Electronics is experimenting with the manufacture of a new type of transistor that is very difficult to mass-produce at an acceptable quality level. Every hour a supervisor takes a random sample of 6 transistors produced on the assembly line. The probability that any one transistor is defective is considered to be .13. MSA wants to know the probability of finding 4 or more defects in the lot sampled. The elements in this problem are

  p = .13     r = 4 defects     n = 6 trials

The question posed may be easily answered by using a cumulative binomial distribution table. Such tables can be very lengthy. For the sake of brevity, we present in Table M6.2 only that portion of a binomial table corresponding to n = 6. Other books may contain complete binomial tables for a broad range of n, r, and p values. Since the probability of MSA finding any one defect is .13, we look through the n = 6 table until we find the column where p = .13. We then move down that column until we are opposite the r = 4 row. The entry there, .0034, is the probability of finding 4 or more defects in the sample.

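When a cumulative table for the particular n and p is not at hand, the same figure can be obtained directly from Equation M6-1 by summing the probabilities of 4, 5, and 6 defects, as in the minimal Python sketch below (the function name is again our own, and the sketch restates it so the example stands alone).

# A minimal sketch: P(R >= 4 | n = 6, p = .13) summed from Equation M6-1.
from math import factorial

def binomial_probability(r, n, p):
    return factorial(n) / (factorial(r) * factorial(n - r)) * p**r * (1 - p)**(n - r)

prob_4_or_more = sum(binomial_probability(r, 6, 0.13) for r in range(4, 7))
print(round(prob_4_or_more, 4))   # 0.0034, matching the table lookup

Either route gives the same answer as reading the value from Table M6.2.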
TABLE M6.2  A Sample Table for the Cumulative Binomial Distribution for n = 6, P(R ≥ r | n, p)

       p
  r    .01     .02     .03     .04     .05     .06     .07     .08     .09     .10
  1   .0585   .1142   .1670   .2172   .2649   .3101   .3530   .3936   .4321   .4686
  2   .0015   .0057   .0125   .0216   .0328   .0459   .0608   .0773   .0952   .1143
  3           .0002   .0005   .0012   .0022   .0038   .0058   .0085   .0118   .0159
  4                                   .0001   .0002   .0003   .0005   .0008   .0013
  5                                                                           .0001

       p
  r    .11     .12     .13     .14     .15     .16     .17     .18     .19     .20
  1   .5030   .5356   .5664   .5954   .6229   .6487   .6731   .6960   .7176   .7397
  2   .1345   .1556   .1776   .2003   .2235   .2472   .2713   .2956   .3201   .3446
  3   .0206   .0261   .0324   .0395   .0473   .0560   .0655   .0759   .0870   .0989
  4   .0018   .0025   .0034   .0045   .0059   .0075   .0094   .0116   .0141   .0170
  5   .0001   .0001   .0002   .0003   .0004   .0005   .0007   .0010   .0013   .0016
  6                                                                           .0001

       p
  r    .21     .22     .23     .24     .25     .26     .27     .28     .29     .30
  1   .7569   .7748   .7916   .8073   .8220   .8358   .8487   .8607   .8719   .8824
  2   .3692   .3937   .4180   .4422   .4661   .4896   .5128   .5356   .5580   .5798
  3   .1115   .1250   .1391   .1539   .1694   .1856   .2023   .2196   .2374   .2557
  4   .0202   .0239   .0280   .0326   .0376   .0431   .0492   .0557   .0628   .0705
  5   .0020   .0025   .0031   .0038   .0046   .0056   .0067   .0079   .0093   .0109
  6   .0001   .0001   .0001   .0002   .0002   .0003   .0004   .0005   .0006   .0007

       p
  r    .31     .32     .33     .34     .35     .36     .37     .38     .39     .40
  1   .8921   .9011   .9095   .9173   .9246   .9313   .9375   .9432   .9485   .9533
  2   .6012   .6220   .6422   .6619   .6809   .6994   .7172   .7343   .7508   .7667
  3   .2744   .2936   .3130   .3328   .3529   .3732   .3937   .4143   .4350   .4557
  4   .0787   .0875   .0969   .1069   .1174   .1286   .1404   .1527   .1657   .1792
  5   .0127   .0148   .0170   .0195   .0223   .0254   .0288   .0325   .0365   .0410
  6   .0009   .0011   .0013   .0015   .0018   .0022   .0026   .0030   .0035   .0041

       p
  r    .41     .42     .43     .44     .45     .46     .47     .48     .49     .50
  1   .9578   .9619   .9657   .9692   .9723   .9752   .9778   .9802   .9824   .9844
  2   .7819   .7965   .8105   .8238   .8364   .8485   .8599   .8707   .8810   .8906
  3   .4764   .4971   .5177   .5382   .5585   .5786   .5985   .6180   .6373   .6563
  4   .1933   .2080   .2232   .2390   .2553   .2721   .2893   .3070   .3252   .3438
  5   .0458   .0510   .0566   .0627   .0692   .0762   .0837   .0917   .1003   .1094
  6   .0048   .0055   .0063   .0073   .0083   .0095   .0108   .0122   .0138   .0156

Source: Reprinted from Robert O. Schlaifer, Introduction to Statistics for Business Decisions, published by McGraw-Hill Book Company, 1961, by permission of the copyright holder, the President and Fellows of Harvard College.

Expected Value and Variance  There is an easy way to compute the expected value and variance of the binomial distribution. The appropriate equations are

  Expected value = np                                        (M6-2)
  Variance = np(1 - p)                                       (M6-3)

The expected value and variance for MSA Electronics can be computed as follows:

  Expected value = np = (6)(.13) = .78
  Variance = np(1 - p) = (6)(.13)(1 - .13) = .6786

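A quick machine check of Equations M6-2 and M6-3, using only the numbers given above (a plain-Python sketch):

# Expected value and variance of the binomial distribution (Equations M6-2 and M6-3)
n, p = 6, 0.13
expected_value = n * p               # np = .78
variance = n * p * (1 - p)           # np(1 - p) = .6786
print(round(expected_value, 2), round(variance, 4))   # 0.78 0.6786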

Discussion Questions and Problems

Discussion Questions
M6-1  What is the Bernoulli process? What probability distribution describes the Bernoulli process, and what conditions must be satisfied before this distribution can be used?
M6-2  What type of distribution is the binomial distribution? What type of distribution is the normal distribution?

Problems
••M6-3  This year, Jan Rich, who is ranked number one in women's singles in tennis, and Marie Wacker, who is ranked number three, will play 4 times. If Marie can beat Jan 3 times, she will be ranked number one. The two players have played 20 times before, and Jan has won 15 games. It is expected that this pattern will continue in the future. What is the probability that Marie will be ranked number one after this year? What is the probability that Marie will win all 4 games this year against Jan?

•• M6-4 Over the last two months, the Wilmington Phantoms have been encountering trouble with one of their star basketball players. During the last 30 games, he has fouled out 15 times. The owner of the basketball team has stated that if this player fouls out 2 times in their next 5 games, the player will be fined $200. What is the probability that the player will be fined? What is the probability that the player will foul out of all 5 games? What is the probability that the player will not foul out of any of the next 5 games?

•• M6-5 Wisconsin Cheese Processor, Inc., produces equipment that processes cheese products. Ken Newgren is particularly concerned about a new cheese processor that has been producing defective cheese crocks. The piece of equipment produces 5 cheese crocks during every cycle of the equipment. The probability that any one of the cheese crocks is defective is .2. Ken would like to determine the probability distribution of defective cheese crocks from this new piece of equipment. There can be 0, 1, 2, 3, 4, or 5 defective cheese crocks for any cycle of the equipment.

• M6-6  Refer to Problem M6-5. Determine the expected value and variance of the distribution of defective cheese crocks described in Problem M6-5, using Equations M6-2 and M6-3.

•• M6-7 Natway, a national distribution company of home vacuum cleaners, recommends that its salespersons make only two calls per day, one in the morning and one in the afternoon. Twenty-five percent of the time a sales call will result in a sale, and the profit from each sale is $125. (a) Develop the probability distribution for sales during a five-day week. (b) Determine the mean and variance of this distribution. (c) What is the expected weekly profit for a salesperson?

Case Study WTVX WTVX, Channel 6, is located in Eugene, Oregon, home of the University of Oregon’s football team. The station was owned and operated by George Wilcox, a former Duck (University of Oregon football player). Although there were other television stations in Eugene, WTVX was the only station that had a weatherperson who was a member of the American Meteorological Society (AMS). Every night, Joe Hummel would be introduced as the only weatherperson in Eugene who was a member of the AMS. This was George’s idea, and he believed that this gave his station the mark of quality and helped with market share.

In addition to being a member of AMS, Joe was also the most popular person on any of the local news programs. Joe was always trying to find innovative ways to make the weather interesting, and this was especially difficult during the winter months, when the weather seemed to remain the same over long periods of time. Joe's forecast for next month, for example, was that there would be a 70% chance of rain every day, and that what happens on one day (rain or shine) was not in any way dependent on what happened the day before.

One of Joe's most popular features of the weather report was to invite questions during the actual broadcast. Questions would be phoned in, and they were answered on the spot by Joe. Once a ten-year-old boy asked what caused fog, and Joe did an excellent job of describing some of the various causes. Occasionally, Joe would make a mistake. For example, a high school senior asked Joe what the chances were of getting 15 days of rain in the next month (30 days). Joe made a quick calculation: (70%) × (15 days/30 days) = (70%)(1/2) = 35%. Joe quickly found out what it was like being wrong in a university town. He had over 50 phone calls from scientists, mathematicians, and other university professors, telling him


that he had made a big mistake in computing the chances of getting 15 days of rain during the next 30 days. Although Joe didn't understand all of the formulas the professors mentioned, he was determined to find the correct answer and make a correction during a future broadcast.

Discussion Questions
1. What are the chances of getting 15 days of rain during the next 30 days?
2. What do you think about Joe's assumptions concerning the weather for the next 30 days?
