Goal Programming

Goal Programming

James P. Ignizio
Resource Management Associates

Carlos Romero
Technical University of Madrid

I. INTRODUCTION  II. HISTORICAL SKETCH  III. THE MULTIPLEX MODEL  IV. FORMS OF THE ACHIEVEMENT FUNCTION  V. GENERAL FORM OF THE MULTIPLEX MODEL

GLOSSARY

achievement function  The function that serves to measure the achievement of the minimization of unwanted goal deviation variables in the goal programming model.
goal function  A mathematical function that is to be achieved at a specified level (i.e., at a prespecified "aspiration" level).
goal program  A mathematical model, consisting of linear or nonlinear functions and continuous or discrete variables, in which all functions have been transformed into goals.
multiplex  Originally this referred to the multiphase simplex algorithm employed to solve linear goal programs. More recently it defines certain specific models and methods employed in multiple- or single-objective optimization in general.
negative deviation  The amount by which a given goal falls below its aspiration level.
positive deviation  The amount by which a given goal exceeds its aspiration level.
satisfice  An old Scottish word referring to the desire, in the real world, to find a practical solution to a given problem, rather than some utopian result for an oversimplified model of the problem.

VI. THE MULTIDIMENSIONAL DUAL  VII. ALGORITHMS FOR SOLUTION  VIII. GOAL PROGRAMMING AND UTILITY OPTIMIZATION  IX. EXTENSIONS  X. THE FUTURE

GOAL PROGRAMMING, a powerful and effective methodology for the modeling, solution, and analysis of problems having multiple and conflicting goals and objectives, has often been cited as the "workhorse" of multiple objective optimization (i.e., the solution of problems having multiple, conflicting goals and objectives), based on its extensive list of successful applications in actual practice. Here we describe the method and its history, cite its mathematical models and algorithms, and chronicle its evolution from its original form into a potent methodology that now incorporates techniques from artificial intelligence (particularly genetic algorithms and neural networks). The article concludes with a discussion of recent extensions and a prediction of the role of goal programming in real-world problem solving in the 21st century.

Encyclopedia of Information Systems, Volume 2. Copyright 2003, Elsevier Science (USA). All rights reserved.

I. INTRODUCTION

A. Definitions and Origin

Real-world decision problems, unlike those found in textbooks, involve multiple, conflicting objectives and goals, subject to the satisfaction of various hard and soft constraints. In short, and as the experienced practitioner is well aware, the problems one encounters outside the classroom are invariably massive, messy, changeable, and complex, and they resist treatment via conventional approaches. Yet the vast majority of traditional approaches to such problems utilize conventional models and methods that idealistically and unrealistically (in most cases) presume the optimization of a single objective subject to a set of rigid

constraints. Goal programming was introduced in an attempt to eliminate or, at the least, mitigate this disquieting disconnect. Conceived and developed by Abraham Charnes and William Cooper, goal programming was originally dubbed "constrained regression." Constrained regression, in turn, was and is a powerful nonparametric method for the development of regression functions (e.g., curve fitting) subject to side constraints. Charnes and Cooper first applied constrained regression in the 1950s to the analysis of executive compensation. Recognizing that the method could be extended to a more general class of problems, that is, any quantifiable problem having multiple objectives and soft as well as rigid constraints, Charnes and Cooper later renamed the method goal programming when describing it within their classic 1961 two-volume text, Management Models and Industrial Applications of Linear Programming.

B. Philosophical Basis

The two philosophical concepts that serve to best distinguish goal programming from conventional (i.e., single-objective) methods of optimization are the incorporation of flexibility in constraint functions (as opposed to the rigid constraints of single-objective optimization) and the adherence to the philosophy of "satisficing" as opposed to optimization. Satisficing, in turn, is an old Scottish word that defines the desire to find a practical, real-world solution to a problem, rather than a utopian, optimal solution to a highly simplified (and very possibly oversimplified) model of that problem. The concept of satisficing, as opposed to optimization, was introduced by Herbert Simon in 1956.

As a consequence of the principle of satisficing, the "goodness" of any solution to a goal programming problem is represented by an achievement function, rather than by the objective function of conventional optimization. The goal programming achievement function measures the degree of nonachievement of the problem goals. The specific way in which this nonachievement is measured characterizes the particular subtype of goal programming approach being employed, and may be defined so as to include the achievement of commensurable as well as noncommensurable goals.

It should be emphasized that, because a goal programming problem is to be satisficed, the solution derived may not fit, conveniently and comfortably, into the concept of optimization or efficiency (i.e., nondominated solutions) as used by more conventional forms of mathematical modeling. This is because, in goal programming, we seek a useful, practical, implementable, and attainable solution rather than one satisfying the mathematician's desire for global optimality. (However, if one wishes, it is relatively trivial to develop efficient, or nondominated, solutions for any goal programming problem. That matter is briefly described in a section to follow.)

C. A Brief List of Applications

Goal programming's label as the "workhorse" of multiple-objective optimization has been earned by its successful solution of important real-world problems over a period of more than 50 years. Included among these applications are:

• The analysis of executive compensation for General Electric during the 1950s
• The design and deployment of the antennas for the Saturn II launch vehicle as employed in the Apollo manned moon-landing program
• The determination of a siting scheme for the Patriot Air Defense System
• Decisions within fisheries in the United Kingdom
• A means to audit transactions within the financial sector (e.g., for the Commercial Bank of Greece)
• The design of acoustic arrays for U.S. Navy torpedoes
• A host of problems in the general areas of agriculture, finance, engineering, energy, and resource allocation

D. Overview of Material to Follow

In this article, the topic of goal programming is covered in a brief but comprehensive manner. Sections to follow discuss the past, present, and future of goal programming, as well as the models and algorithms for its implementation. The reader having previous exposure to the original goal programming approach will (or should) immediately notice the many significant changes and extensions that occurred during the 1990s. As just one example, powerful and practical hybrid goal programming and genetic algorithm modeling and solution methods will be discussed. Readers seeking more detailed explanations of any of the material covered herein are referred to the Bibliography at the end of this article.


II. HISTORICAL SKETCH

As mentioned, goal programming was conceived by Abraham Charnes and William Cooper nearly a half century ago. The tool was extended and enhanced by their students and, later, by other investigators, most notably Ijiri, Jääskeläinen, Huss, Ignizio, Gass, Romero, Tamiz, and Jones.

In its original form, goal programming was strictly limited to linear multiple-objective problems. Ignizio, in the 1960s, extended the method to both nonlinear and integer models, developed the associated algorithms for these extensions, and successfully applied them to a number of important real-world problems, including, as previously mentioned, the design of the antenna systems for the Saturn II launch vehicle as employed in the Apollo manned moon-landing program. During that same period, and in conjunction with Paul Huss, Ignizio developed a sequential algorithm that permits one to extend, with minimal modification, any single-objective optimization software package to the solution of any class of goal programming models (the approach was also developed, independently, by Dauer and Kruger). Later in that same decade, Ignizio developed the concept of the multidimensional dual, providing goal programming with an effective economic interpretation of its results as well as a means to support sensitivity and postoptimality analysis. Huss and Ignizio's contributions in engineering, coupled with the work of Charnes, Cooper, Ijiri, Jääskeläinen, Gass, Romero, Tamiz, Jones, Lee, Olson, and others in management science, served to motivate the interest in multiple objective optimization that continues today.

Goal programming is the most widely applied tool of multiple-objective optimization/multicriteria decision making. However, today's goal programming models, methods, and algorithms differ significantly from those employed even in the early 1990s. Goal programming, as discussed later, may be combined with various tools from the artificial intelligence sector (most notably genetic algorithms and neural networks) so as to provide an exceptionally robust and powerful means to model, solve, and analyze a host of real-world problems. In other words, today's goal programming, while maintaining its role as the "workhorse" of multiple-objective decision analysis, is a much different tool than that described in most textbooks, even those published relatively recently.

III. THE MULTIPLEX MODEL

A. Numerical Illustrations

Any single-objective problem, and most multiple-objective ones, can be placed into a model format that has been designated as the multiplex model, and then solved via the most appropriate version of a multiplex (or sequential goal programming) algorithm. For example, consider a conventional (albeit simple) linear programming problem taking on the following traditional form:

Maximize z = 10x₁ + 4x₂   (1)

Subject to:

x₁ + x₂ ≤ 100   (2)

x₂ ≤ 4   (3)

x ≥ 0   (4)

Ignoring the fact that this undemanding single-objective model can be solved by inspection, let us transform it into the multiplex form for the sake of illustration. To do so, we add a negative deviation variable (ηᵢ) to, and subtract a positive deviation variable (ρᵢ) from, each constraint. In addition, we transform the maximizing objective function into a minimizing form by simply multiplying the original objective function by negative one. The resultant model, in multiplex form, can be written:

Lexicographically minimize U = {(ρ₁ + ρ₂), (−10x₁ − 4x₂)}   (5)

Satisfy:

x₁ + x₂ + η₁ − ρ₁ = 100   (6)

x₂ + η₂ − ρ₂ = 4   (7)

η, ρ, x ≥ 0   (8)

The new variables (i.e., the negative and positive deviation variables that have been added to the constraints) indicate that a solution to the problem may result, for a given constraint i, in a negative deviation (ηᵢ > 0), a positive deviation (ρᵢ > 0), or no deviation (ηᵢ = ρᵢ = 0). That is to say, we can underachieve a goal (be it a hard or soft constraint), overachieve it, or precisely satisfy it. In the multiplex formulation, the deviation variables that are to be minimized, here ρ₁ and ρ₂, appear in the first (highest priority) term of the achievement function, function (5). While this new formulation may appear unusual (at least to those schooled in traditional, single-objective optimization), it provides an accurate


representation of the linear programming problem originally posed. To appreciate this, examine the achievement function, as represented by formula (5). The multiplex achievement function is a vector, rather than a scalar as in conventional single-objective optimization (e.g., linear programming). The terms in this vector are ordered according to priority. The first term [i.e., ρ₁ + ρ₂ in function (5)] is reserved for the unwanted deviation variables of all rigid constraints, or hard goals: restrictions that supposedly must be satisfied for the solution to be deemed feasible. Any solution in which this first term takes on a value of zero is thus, in mathematical programming terms, a feasible solution. In goal programming, such a solution is deemed "implementable," indicating that it could actually be implemented in the real-world problem under consideration.

Once the first term has been minimized, the next term (the second term, or −10x₁ − 4x₂ in this case) can be dealt with. The algorithm will seek a solution that minimizes the value of this second term, but this must be accomplished without degrading the value already achieved in the higher priority term. And this is the manner in which one seeks the lexicographic minimum of an ordered vector.

Again, this formulation may appear unusual, but it not only accurately represents the linear programming (LP) problem, it also indicates the way in which most commercial software actually solves LP models. Specifically, LP problems are generally solved by the two-phase simplex algorithm, wherein the first phase attempts to find a feasible solution and the second seeks an optimal solution that does not degrade the feasibility achieved in phase 1. Multiplex algorithms simply extend this notion to any number of phases, according to the formulation employed to represent the given problem.
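The phase-by-phase process just described can be sketched with an off-the-shelf LP solver. The following is a minimal illustration, not from the original article, assuming SciPy's `linprog` is available; the variable ordering and names are ours. It solves the multiplex model (5)-(8) sequentially: first minimize ρ₁ + ρ₂, then minimize −10x₁ − 4x₂ while forbidding any degradation of the first-priority result.

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: v = [x1, x2, eta1, eta2, rho1, rho2]
A_eq = np.array([[1, 1, 1, 0, -1,  0],   # x1 + x2 + eta1 - rho1 = 100   (6)
                 [0, 1, 0, 1,  0, -1]])  #      x2 + eta2 - rho2 = 4     (7)
b_eq = np.array([100, 4])

# Phase 1: minimize rho1 + rho2, the unwanted deviations of the hard goals.
c1 = np.array([0, 0, 0, 0, 1, 1])
p1 = linprog(c1, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)

# Phase 2: minimize -10x1 - 4x2, holding rho1 + rho2 at the level already
# achieved (here zero) so that feasibility is not degraded.
c2 = np.array([-10, -4, 0, 0, 0, 0])
p2 = linprog(c2, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6,
             A_ub=[c1], b_ub=[p1.fun])

# p2.x[:2] is (100, 0); the maximized objective is z = -p2.fun = 1000.
```

Because phase 1 attains zero, the phase 2 constraint reduces the equalities (6)-(7) back to the original inequalities (2)-(3), and the lexicographic minimum reproduces the LP optimum.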

B. Multiplex Form of the Goal Programming Problem  While any single-objective optimization problem can be represented in a manner similar to that described above, our interest lies in multiple-objective optimization and, more specifically, in goal programming (GP). Consequently, let us examine the multiplex model for a specific GP problem. Consider the illustrative problem represented below, wherein there are two objective functions to be optimized, functions (9) and (10), and a set of constraints to be satisfied. For purposes of discussion, we assume that the constraints of (11) through (13) are hard, or rigid.

Maximize z 1  3x 1  x 2 (profit per time period) (9) Maximize z 2  2x 1  3x 2 (market shares captured per time period) (10) Satisfy: 2x 1  x 2  50 (raw material limitations) (11) x 1  20 (market saturation level, product 1) (12) x 2  30 (market saturation level, product 2) (13) x 

0

(nonnegativity conditions) (14) If the reader cares to graph this problem, he or she  will see that there is no way in which to optimize both objectives simultaneously—as is the case in virtually  any nontrivial, real-world problem. However, the purpose of goal programming is to find a solution, or solutions, that simultaneously  satisfice  all objectives. But  first these objectives must be transformed into goals. To transform an objective into a goal, one must assign some estimate  (usually the decision maker’s preliminary estimate) of the aspired level for that goal. Let’s assume that the aspiration level for profit [i.e., function (9)] is 50 units while that of market shares [i.e., function (10)] is 80 units. Consequently the multiplex model for the goal programming problem is shown below, wherein the two transformed objectives now appear as (soft) goals (19) and (20), respectively: Lexicographically minimize U  {(1   2   3), ( 4   5)} (15) Satisfy: 2x 1  x 2   1  1  50 (raw material limitations) (16) x 1   2  2  20 (market saturation level, product 1) (17) x 2   3  3  30 (market saturation level, product 2) (18)

3x 1  x 2  4   4  50 (profit goal) (19) 2x 1  3x 2  5   5  80 (market shares goal) (20) , ,   0



(nonnegativity conditions) (21) The multiplex model for the problem indicates—via the achievement function—that the first priority is to satisfy the hard goals of (16), (17), and (18). Note


that the nonnegativity conditions of (21) will be implicitly satisfied by the algorithm. Once the deviation variables (i.e., ρ₁, ρ₂, and ρ₃) associated with those hard goals have been minimized (albeit not necessarily to a value of zero), the associated multiplex (or sequential GP) algorithm proceeds to minimize the unwanted deviations (i.e., η₄ and η₅) associated with the profit and market share goals, while not degrading the values of any higher ordered achievement function terms.

Figure 1 serves to indicate the nature of the goal programming problem that now exists.

Figure 1  The satisficing region for the example.

Note that the solid lines represent the constraints, or hard goals, while the dashed lines indicate the original objectives, now transformed into soft goals. It is particularly important to note that the solutions satisficing this problem form a region, bounded by points A, B, C, and D, and including all points within and on the edges of the bounded region. This contrasts with conventional optimization, in which the optimal solution is most often a single point.

The achievement functions for the linear programming and goal programming illustrations posed previously represent but two possibilities from a large and growing number of choices. We describe a few of the more common achievement functions in the next section.

IV. FORMS OF THE ACHIEVEMENT FUNCTION

The three earliest, and still most common, forms of the multiplex achievement function are listed here and discussed in turn:

1. Archimedean (also known as weighted goal programming)
2. Non-Archimedean (also known as lexicographic, or preemptive, goal programming)
3. Chebyshev (also known as fuzzy programming)

A. Archimedean Goal Programming

The achievement function for an Archimedean GP model consists of exactly two terms. The first term always contains all the unwanted deviation variables associated with the hard goals (rigid constraints) of the problem. The second term lists the unwanted deviation variables for all soft goals (flexible constraints), each weighted according to importance. Returning to our previous goal programming formulation, assume that the market shares goal [i.e., function (20)] is considered, by the decision maker, to be twice as important as the profit goal. Consequently, the Archimedean form of the achievement function could be written as follows. Notice carefully that η₅ has now been weighted by 2:

Lexicographically minimize U = {(ρ₁ + ρ₂ + ρ₃), (η₄ + 2η₅)}   (22)

Note that, as long as the unwanted deviations are minimized, we have achieved a satisficing solution. This may mean that we reach a satisficing solution, for our example, whether the profit achieved is 50 units or more. This implies a one-sided measure of achievement. If we wish to reward an overachievement of the profit goal, then either the model should be modified or we should employ one of the more recent developments in achievement function formatting. The latter matter is discussed in a forthcoming section. For the moment, however, our assumption is that we simply seek a solution to the achievement function given in Eq. (22), one that provides a satisficing result.

Realize that Archimedean, or weighted, goal programming makes sense only if you believe that numerical weights can be assigned to the nonachievement of each soft goal. In many instances, goals are noncommensurable, and thus other forms of the achievement function are more realistic. One of these is the non-Archimedean, or lexicographic, achievement function.

B. Non-Archimedean Goal Programming

The achievement function for a non-Archimedean GP model consists of two or more terms. As in the case of the Archimedean form, the first term always contains the unwanted deviation variables for all the hard goals. After that, the deviation variables for all soft goals are arranged according to priority, more specifically, a preemptive priority. To demonstrate, consider the problem previously posed as an Archimedean GP. Assume that we are unable, or unwilling, to assign weights to the profit or market share goals. But we are convinced that the capture of market share is essential to the survival of the firm. It might then make sense to assign a higher priority to market share than to profit, resulting in the non-Archimedean achievement function given here:

Lexicographically minimize U = {(ρ₁ + ρ₂ + ρ₃), (η₅), (η₄)}   (23)

While the achievement function of Eq. (23) contains but a single deviation variable in each of the second and third terms, the reader should understand that several deviation variables may appear in a given term, if you are able to weight each according to its perceived importance.
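In the linear case, a preemptive achievement function such as Eq. (23) can be minimized by solving one LP per priority level, locking in each level's attainment before proceeding. Below is an illustrative sketch, not the authors' code, assuming SciPy's `linprog`; the variable ordering and index choices are ours. It applies the priority structure of Eq. (23) to the example model (16)-(20).

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: v = [x1, x2, eta1..eta5, rho1..rho5] -- 12 variables.
n = 12
A_eq = np.zeros((5, n))
b_eq = np.array([50, 20, 30, 50, 80])
A_eq[0, :2] = [2, 1]          # 2x1 + x2  + eta1 - rho1 = 50   (16)
A_eq[1, 0] = 1                # x1        + eta2 - rho2 = 20   (17)
A_eq[2, 1] = 1                # x2        + eta3 - rho3 = 30   (18)
A_eq[3, :2] = [3, 1]          # 3x1 + x2  + eta4 - rho4 = 50   (19)
A_eq[4, :2] = [2, 3]          # 2x1 + 3x2 + eta5 - rho5 = 80   (20)
for i in range(5):
    A_eq[i, 2 + i] = 1        # eta_i at column 2+i
    A_eq[i, 7 + i] = -1       # rho_i at column 7+i

# Priority terms of achievement function (23): {(rho1+rho2+rho3), (eta5), (eta4)}
terms = [np.zeros(n) for _ in range(3)]
terms[0][[7, 8, 9]] = 1       # unwanted deviations of the hard goals
terms[1][6] = 1               # eta5: market-share shortfall
terms[2][5] = 1               # eta4: profit shortfall

A_lock, b_lock = [], []
for c in terms:
    res = linprog(c, A_ub=np.array(A_lock) if A_lock else None,
                  b_ub=np.array(b_lock) if b_lock else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    A_lock.append(c)          # later levels may not degrade this level
    b_lock.append(res.fun)

x1, x2 = res.x[:2]            # a satisficing point: every unwanted deviation is 0
```

For this particular example every priority level can be driven to zero, so the solver returns a point of the satisficing region A-B-C-D of Figure 1; which vertex or edge point is returned depends on the underlying LP solver.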

C. Chebyshev (Fuzzy) Goal Programming

There are numerous forms of Chebyshev, or fuzzy, goal programming, but we restrict our coverage to just one in this subsection. The notion of Chebyshev GP is that the solution sought is the one that minimizes the maximum deviation from any single soft goal. Returning to our profit and market shares model, one possible transformation is as follows:

Minimize λ   (24)

Satisfy:

2x₁ + x₂ ≤ 50  (raw material limitations)   (25)

x₁ ≤ 20  (market saturation level, product 1)   (26)

x₂ ≤ 30  (market saturation level, product 2)   (27)

λ ≥ (U₁ − z₁)/(U₁ − L₁)   (28)

λ ≥ (U₂ − z₂)/(U₂ − L₂)   (29)

λ, x ≥ 0   (30)

where:

Uₖ = the best possible value for objective k (e.g., optimize the problem without regard to any objective but objective k)
Lₖ = the worst possible value for objective k (e.g., optimize the problem without regard to objective k)
λ = a dummy variable representing the worst deviation level
zₖ = the value of the function representing the kth objective (e.g., z₁ = 3x₁ + x₂ and z₂ = 2x₁ + 3x₂)

Given the specific model of (24) through (30), the resulting Chebyshev formulation is simply:

Minimize λ

Satisfy:

2x₁ + x₂ ≤ 50  (raw material limitations)

x₁ ≤ 20  (market saturation level, product 1)

x₂ ≤ 30  (market saturation level, product 2)

λ ≥ (70 − 3x₁ − x₂)/(70 − 60)

λ ≥ (110 − 2x₁ − 3x₂)/(110 − 70)

λ, x ≥ 0

This model may be easily transformed into the multiplex form by adding the necessary deviation variables and forming the associated achievement function. However, it is clear that the Chebyshev model, as shown, is simply a single-objective optimization problem in which we seek to minimize a single variable, λ. In other words, we seek to minimize the single worst deviation from any one of the problem goals/constraints.

While the Archimedean, non-Archimedean, and Chebyshev forms of the achievement function are the most common, other, newer versions may offer certain advantages. As mentioned, these newer forms of the achievement function are briefly described in a later section on extensions of GP.
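Since the Chebyshev model is an ordinary single-objective LP, it can be solved directly. A tentative sketch (SciPy assumed, not part of the original article), treating λ as a third decision variable and rewriting the two ratio constraints as linear inequalities:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: [x1, x2, lam]; lam bounds the worst normalized goal deviation.
c = np.array([0, 0, 1])                    # minimize lam
A_ub = np.array([[ 2,  1,   0],            # 2x1 +  x2        <= 50
                 [ 1,  0,   0],            #  x1              <= 20
                 [ 0,  1,   0],            #       x2         <= 30
                 [-3, -1, -10],            # lam >= (70  - z1)/10
                 [-2, -3, -40]])           # lam >= (110 - z2)/40
b_ub = np.array([50, 20, 30, -70, -110])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
# lam = 0.5, attained at (x1, x2) = (15, 20)
```

At the optimum both normalized deviations are equal (each soft goal is half-satisfied), which is characteristic of minimax solutions: the worst-off goal is made as well off as possible.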

V. GENERAL FORM OF THE MULTIPLEX MODEL

Whatever the form of the achievement function, the multiplex model takes on the following general form:

Lexicographically minimize U = {c(1)ᵀv, c(2)ᵀv, ..., c(K)ᵀv}   (31)

Satisfy:

F(v) = b   (32)

v ≥ 0   (33)

where:

K = the total number of terms (e.g., priority levels) in the achievement function
F(v) = the problem goals, of either linear or nonlinear form, in which negative and positive deviation variables have been augmented
b = the right-hand side vector
v = the vector of all structural (e.g., xⱼ) and deviation (i.e., ηᵢ and ρᵢ) variables
c(k) = the vector of coefficients, or weights, of v in the kth term of the achievement function
c(k)ᵀ = the transpose of c(k)

We designate functions (31) through (33) as the primal form of the multiplex model. Starting with this primal form, we may easily derive the dual of any goal programming, or multiplex, model.

VI. THE MULTIDIMENSIONAL DUAL

The real power underlying conventional single-objective mathematical programming, particularly linear programming, lies in the fact that there exists a dual for any conventional mathematical programming model. For example, the dual of a maximizing LP model, subject to type I (≤) constraints, is a minimizing LP model, subject to type II (≥) constraints. The property of duality allows one to exploit, for example, LP models so as to develop additional theories and algorithms, as well as to provide a useful economic interpretation of the dual variables.

One of the alleged drawbacks of goal programming has been the "lack of a dual formulation." It is difficult to understand why this myth has endured, as the dual of goal programming problems was developed by Ignizio in the early 1970s and was later extended to the more general multiplex model in the 1980s. However, space does not permit an exhaustive summary of the multidimensional dual (MDD), and thus we present only a brief description.

We listed the general form of the multiplex model in (31) through (33). Simply for the sake of discussion, let us examine the dual formulation of a strictly linear multiplex model, taking on the primal form listed below:

PRIMAL: Lexicographically minimize U = {c(1)ᵀv, c(2)ᵀv, ..., c(K)ᵀv}   (34)

Satisfy:

Av = b   (35)

v ≥ 0   (36)

where:

K = the total number of terms (e.g., priority levels) in the achievement function
v = the vector of all structural (e.g., xⱼ) and deviation (i.e., ηᵢ and ρᵢ) variables
c(k) = the vector of coefficients, or weights, of v in the kth term of the achievement function
Av = b are the linear constraints and goals of the problem, as transformed via the introduction of negative and positive deviation variables

If you are familiar with single-objective optimization, you may recall that the dual of a linear programming model is still a linear programming model. In the case of a GP, or multiplex, model, however, its dual (the multidimensional dual) takes on the form of a model in which (1) the "constraints" have multiple, prioritized right-hand sides and (2) the "objective function" is in the form of a vector. More specifically, the general form of the multidimensional dual is given as:

DUAL: Find Y so as to lexicographically maximize w = bᵀY   (37)

Subject to:

AᵀY ⇐ c(1), c(2), ..., c(K)   (38)

Y, the dual variables, are unrestricted and multidimensional   (39)

Note that the symbol ⇐ indicates the lexicographic nature of the inequalities involved (i.e., the left-hand side of each function is lexicographically less than or equal to the multiple right-hand sides). Physically, this means that we first seek a solution subject to the first column of right-hand-side elements. Next, we find a solution subject to the second column of right-hand-side elements, but one that cannot degrade the solution achieved for the previous column. We continue in this manner until a complete set of solutions has been obtained for all right-hand-side values.

The transformation from primal to dual may be summarized as follows:

• A lexicographically minimized achievement function for the primal translates into a set of lexicographically ordered right-hand sides for the dual.
• If the primal is to be lexicographically minimized, the dual is to be lexicographically maximized.
• For every priority level (achievement function term) in the primal, there is an associated vector of dual variables in the dual.
• Each element of the primal achievement function corresponds to an element in the right-hand side of the dual.
• The technological coefficients of the dual are the transpose of the technological coefficients of the primal.

The development of the MDD for goal programming (or general multiplex) models leads immediately both to a means for economic interpretation of the dual variable matrix and to supporting algorithms for solution. A comprehensive summary of both aspects, as well as numerous illustrative numerical examples, is provided in the references.

VII. ALGORITHMS FOR SOLUTION

A. Original Approach

As noted, algorithms exist for the solution of goal programming problems (as well as of any problem that can be placed into the multiplex format) in either primal or dual form. The original emphasis of goal programming was, as discussed, on linear goal programs, and the GP algorithms derived then (by Charnes and Cooper, and their students) were "multiphase" simplex algorithms. That is, they were based on a straightforward extension of the two-phase simplex algorithm.

B. Serial Algorithms: Basic Approach

Assuming that a serial algorithm (one in which only a single solution exists at a given time, as is the case with the well-known simplex algorithm for linear programming) is used to solve a goal programming problem, the fundamental steps are as follows:

Step 1. Transform the problem into the multiplex format.
Step 2. Select a starting solution. [In the case of a linear model, the starting solution is often the one in which all the structural variables (i.e., the xⱼ's) are set to zero.]
Step 3. Evaluate the present solution (i.e., determine the achievement function vector).
Step 4. Determine whether a termination criterion has been satisfied. If so, stop the search process. If not, go to step 5. [Note that, in the case of linear models, the shadow prices (dual variable values) are used to determine whether optimality has been reached.]
Step 5. Explore the local region about the present best solution to determine the best direction of movement. (For linear models, we move in the direction indicated by the best single shadow price.)
Step 6. Determine how far to move in the direction of best improvement, and then do so. (In linear models, this is determined by the so-called "theta" or "blocking variable" rule.)
Step 7. Repeat steps 3 through 6 until a termination rule is satisfied.
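For linear multiplex models, the serial steps above collapse into "solve one LP per priority level, then lock in the attainment before moving to the next level." The following generic sketch is our own illustration (the function name `sequential_multiplex` is ours, and SciPy is assumed), applied to the earlier multiplex model (5)-(8):

```python
import numpy as np
from scipy.optimize import linprog

def sequential_multiplex(terms, A_eq, b_eq):
    """Minimize each achievement-function term c(k)'v in priority order,
    adding a constraint after each level so later levels cannot degrade it."""
    A_lock, b_lock, achieved = [], [], []
    for c in terms:
        res = linprog(np.asarray(c, dtype=float),
                      A_ub=np.array(A_lock) if A_lock else None,
                      b_ub=np.array(b_lock) if b_lock else None,
                      A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * len(c))
        A_lock.append(c)            # no-degradation constraint for this level
        b_lock.append(res.fun)
        achieved.append(res.fun)
    return res.x, achieved

# Multiplex model (5)-(8): v = [x1, x2, eta1, eta2, rho1, rho2]
A_eq = np.array([[1, 1, 1, 0, -1,  0],
                 [0, 1, 0, 1,  0, -1]], dtype=float)
v, achieved = sequential_multiplex(
    [[0, 0, 0, 0, 1, 1],            # priority 1: rho1 + rho2
     [-10, -4, 0, 0, 0, 0]],        # priority 2: -10x1 - 4x2
    A_eq, np.array([100, 4], dtype=float))
```

The `achieved` list is exactly the ordered achievement function vector of the final solution, one entry per priority level.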


C. Parallel Algorithms: Basic Approach

The references provide details and illustrations of the use of such serial multiplex algorithms for linear, nonlinear, and integer goal programming problems. However, with the advent of hybrid multiplex algorithms, the use of parallel algorithms is both possible and, in the minds of some, preferable.

A parallel algorithm follows somewhat the same steps as listed previously for serial algorithms. The primary difference is that rather than employing a single solution point at each step, multiple points, or populations of solutions, exist at each iteration (or "generation") of the algorithm. There are two very important advantages to the employment of parallel algorithms in goal programming. The first is that, if the algorithm is supported by parallel processors, the speed of convergence to the final solution is significantly increased. The second advantage is less well known, but quite likely even more significant. Specifically, it would appear from the evidence so far that solutions derived by certain types of parallel algorithms (more specifically, those employing evolutionary operations) are far more stable, and less risky, than those derived by conventional means.

D. Hybrid Algorithm Employing Genetic Algorithms

The basics of a hybrid goal programming/genetic algorithm for solving multiplex models are listed below. Such an approach has been found, in actual practice, to achieve exceptionally stable solutions—and to do so rapidly. Furthermore, it is particularly amenable to parallel processing.

Step 1. Transform the problem into the multiplex format.
Step 2. Randomly select a population of trial solutions (typically 20 to a few hundred initial solutions will compose the first generation).
Step 3. Evaluate the present solutions (i.e., determine the achievement function vector for each member of the present population).
Step 4. Determine if a termination criterion has been satisfied. If so, stop the search process. If not, go to step 5.
Step 5. Utilize the genetic algorithm operations of selection, mating, reproduction, and mutation to develop the next generation of solutions (see the references for details on genetic algorithms).
Step 6. Repeat steps 3 through 5 until a termination rule is satisfied.
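A minimal sketch of steps 1–6, assuming simple illustrative operators (averaging crossover, Gaussian mutation) and example goal data; this is not the authors' specific hybrid algorithm:

```python
import random

# Hedged sketch: a goal program whose achievement vector (a tuple,
# compared lexicographically by Python) is minimized by a plain genetic
# algorithm. Encoding, operators, parameters, and goal data are
# illustrative assumptions.

random.seed(42)
BOUNDS = [(0.0, 25.0), (0.0, 30.0)]      # assumed variable ranges

def achievement(x):
    """Achievement vector: (priority-1 total, priority-2 total)."""
    x1, x2 = x
    p1 = max(0.0, 2 * x1 + x2 - 50)          # raw-material overshoot
    n4 = max(0.0, 50 - (3 * x1 + x2))        # profit undershoot
    n5 = max(0.0, 80 - (2 * x1 + 3 * x2))    # market-share undershoot
    return (p1, n4 + n5)

def evolve(pop_size=60, generations=80, mut_rate=0.2):
    # Step 2: random initial population.
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Step 3: evaluate; tuples sort lexicographically, as required.
        pop.sort(key=achievement)
        parents = pop[:pop_size // 2]            # Step 5: selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # mating
            if random.random() < mut_rate:                   # mutation
                i = random.randrange(len(child))
                lo, hi = BOUNDS[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 2)))
            children.append(child)
        pop = parents + children                 # next generation
    return min(pop, key=achievement)

best = evolve()
```

Keeping the parents in each generation (elitism) guarantees the best achievement vector never worsens between generations.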


The parallel algorithm developed by combining goal programming with genetic algorithms offers a convenient way to address any single- or multiple-objective optimization problem, be it linear, nonlinear, or discrete in nature. The single disadvantage is that global optimality cannot be ensured.

VIII. GOAL PROGRAMMING AND UTILITY OPTIMIZATION

Although goal programming was developed within a satisficing framework, its different variants can be interpreted from the point of view of utility theory, as highlighted in this section. This type of analysis helps to clarify each goal programming variant, as well as to provide the foundations of certain extensions of the achievement function.

Let us start with lexicographic goal programming, where the noncompatibility between lexicographic orderings and utility functions is well known. To properly assess the effect of this property on the pragmatic value of the approach, we must understand that the reason for this noncompatibility is exclusively the noncontinuity of the preferences underlying lexicographic orderings. Therefore, a worthwhile matter of discussion is not to argue against lexicographic goal programming because it implicitly assumes a noncontinuous system of preferences, but to determine whether the characteristics of the problem situation justify a system of continuous preferences. Hence, the possible problem associated with the use of the lexicographic variant does not lie in its noncompatibility with utility functions, but in the careless use of this approach. In fact, in contexts where the decision maker's preferences are clearly continuous, a model based on nonpreemptive weights should be used. Moreover, it is also important to note that a large number of priority levels can lead to a solution where every goal, except those situated in the first two or three priority levels, is redundant. In this situation, the possible poor performance of the model is not due to a lack of utility meaning in the achievement function but to an excessive number of priority levels or to overly optimistic aspiration levels (i.e., levels close to the ideal values of the goals).
Regarding weighted (Archimedean) goal programming, we know that underlying this option is the maximization of a separable and additive utility function in the goals considered. Thus, the Archimedean solution provides the maximum aggregate achievement among the goals considered. Consequently, it seems advisable to test for separability between attributes before the decision problem is modeled with the help of this variant.


Regarding Chebyshev goal programming, it is recognized that underlying this variant is a utility function in which the maximum (worst) deviation level is minimized. In other words, the Chebyshev option corresponds to the optimization of a MINMAX utility function, for which the most balanced solution among the achievements of the different goals is obtained. These insights are important for the appropriate selection of the goal programming variant. In fact, the appropriate variant should not be chosen in a mechanistic way but in accordance with the decision maker's structure of preferences. These results also give theoretical support to the extensions of the achievement function to be commented on in the next section.

IX. EXTENSIONS

A. Transforming Satisficing Solutions into Efficient Solutions

The satisficing logic underlying goal programming implies that its formulations may produce solutions that do not meet classic optimality requirements such as efficiency; that is, solutions for which the achievement of at least one of the goals can be improved without degrading the achievement of the others. However, if one wishes, it is very simple to force the goal programming approach to produce efficient solutions. To secure efficiency, it is enough to augment the achievement function of the multiplex formulation with an additional priority level in which the sum of the wanted deviation variables is maximized.

For instance, in the example plotted in Fig. 1 the closed domain ABCD represents the set of satisficing solutions and the edge BD the set of satisficing and efficient solutions. Thus, if the achievement function of the multiplex model (15) through (21) is augmented with the term (ρ4 + ρ5) placed in a third priority level, then the lexicographic process will produce solution point B, a point that is satisficing and efficient at the same time. Note that there are more refined methods capable of distinguishing the efficient goals from the inefficient ones, as well as simple techniques for restoring the efficiency of goals previously classified as inefficient. Technical details about this type of procedure can be found in the Bibliography.
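Under the assumption that ρ4 and ρ5 are the wanted (positive) deviations of the profit and market-share goals, a small sketch of the augmented lexicographic comparison (the candidate points are hypothetical, not the actual corners of Fig. 1):

```python
# Hedged illustration of restoring efficiency: a third priority level
# maximizes the wanted deviations (here rho4 + rho5), implemented as
# minimizing their negative. Goal data follow the reconstruction in the
# text; both candidate points are assumed for illustration.

def deviations(x1, x2):
    """Deviation variables of the example goals (eta = under, rho = over)."""
    def split(value, target):
        return max(0.0, target - value), max(0.0, value - target)
    eta1, rho1 = split(2 * x1 + x2, 50)      # raw material
    eta4, rho4 = split(3 * x1 + x2, 50)      # profit
    eta5, rho5 = split(2 * x1 + 3 * x2, 80)  # market shares
    return {"rho1": rho1, "eta4": eta4, "eta5": eta5,
            "rho4": rho4, "rho5": rho5}

def augmented_achievement(x1, x2):
    d = deviations(x1, x2)
    return (d["rho1"],                    # priority 1: rigid limit
            d["eta4"] + d["eta5"],        # priority 2: unwanted deviations
            -(d["rho4"] + d["rho5"]))     # priority 3: maximize wanted ones

# Both points satisfy every goal (priorities 1 and 2 are zero), but only
# the second also maximizes the wanted deviations, so the lexicographic
# minimum picks it -- a satisficing *and* efficient solution.
candidates = [(10, 20), (15, 20)]
best = min(candidates, key=lambda p: augmented_achievement(*p))
```

Without the third priority level the two points would tie, which is exactly how an inefficient satisficing solution can survive the optimization.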

B. Extensions of the Achievement Function

According to the arguments developed in Section VIII, from a preferential point of view the weighted and the Chebyshev goal programming solutions represent two opposite poles. Since the weighted option maximizes the aggregate achievement among the goals considered, its results can be biased against the performance achieved by one particular goal. On the other hand, because of the preponderance of just one of the goals, the Chebyshev model can provide results with poor aggregate performance among the different goals. The extreme character of both solutions can in some cases lead to solutions unacceptable to the decision maker. A possible modeling remedy for this type of problem consists of compromising the aggregate achievement of the Archimedean model with the MINMAX (balanced) character of the Chebyshev model. Thus, the example of the earlier section can be reformulated with the help of the following multiplex extended goal programming model:

Lexicographically minimize
U = {(ρ1 + ρ2 + ρ3), [(1 − Z)D + Z(η4 + η5)]}   (40)

Satisfy:
2x1 + x2 + η1 − ρ1 = 50   (raw material limitations)   (41)
x1 + η2 − ρ2 = 20   (market saturation level, product 1)   (42)
x2 + η3 − ρ3 = 30   (market saturation level, product 2)   (43)
3x1 + x2 + η4 − ρ4 = 50   (profit goal)   (44)
2x1 + 3x2 + η5 − ρ5 = 80   (market shares goal)   (45)
(1 − Z)η4 − D ≤ 0   (46)
(1 − Z)η5 − D ≤ 0   (47)
η, x, ρ, D ≥ 0   (48)

where the parameter Z weights the importance attached to the minimization of the sum of unwanted deviation variables. For Z = 0 we have a Chebyshev goal programming model; for Z = 1 the result is a weighted goal programming model; and for values of Z in the interval (0, 1), intermediate solutions between those provided by the two goal programming options are obtained. Hence, through variations in the value of the parameter Z, compromises between the solution of maximum aggregate achievement and the MINMAX solution can be generated. In this sense, this extended formulation allows for a combination of goal programming variants that, in some cases, can reflect a

decision maker's actual preferences with more accuracy than any single variant.

Other extensions of the achievement function have been derived by noting that all traditional goal programming formulations carry the underlying assumption that any unwanted deviation with respect to its aspiration level is penalized according to a constant marginal penalty. In other words, any marginal change is of equal importance no matter how distant it is from the aspiration level. This type of formulation allows only a linear relationship between the value of the unwanted deviation and the penalty contribution, and it corresponds to the achievement function underlying a weighted goal programming model or to each priority level of a lexicographic model. Such a function has been termed a one-sided penalty function, when only one deviation variable is unwanted, or a V-shaped penalty function, when both deviation variables are unwanted. However, other penalty function structures have been used as well: the two-sided penalty function, when the decision maker feels satisfied as long as the achievement of a given goal lies within a certain aspiration-level interval, and the U-shaped penalty function, when the marginal penalties increase monotonically with the distance from the aspiration level. Several authors have proposed improvements and refinements to the goal programming model with penalty functions. Details about this type of modeling are described in the Bibliography.
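A minimal sketch of these penalty shapes, assuming simple illustrative functional forms (unit slopes, an arbitrary interval, and quadratic growth) that are not taken from the cited literature:

```python
# Hedged sketch of the penalty-function shapes named above. The exact
# functional forms are illustrative assumptions; the works cited in the
# Bibliography give the precise formulations.

def one_sided(value, target, weight=1.0):
    """Penalize deviation in only one (here: upward) direction."""
    return weight * max(0.0, value - target)

def v_shaped(value, target, w_under=1.0, w_over=1.0):
    """Constant marginal penalty on both sides of the aspiration level."""
    return (w_under * max(0.0, target - value)
            + w_over * max(0.0, value - target))

def two_sided(value, lo, hi, weight=1.0):
    """Zero penalty inside the aspiration interval [lo, hi]."""
    if value < lo:
        return weight * (lo - value)
    if value > hi:
        return weight * (value - hi)
    return 0.0

def u_shaped(value, target, weight=1.0):
    """Marginal penalty grows with distance from the aspiration level."""
    return weight * (value - target) ** 2
```

The one-sided and V-shaped forms have constant marginal penalties, matching the traditional formulations; the U-shaped form is the one whose marginal penalty increases with the deviation.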

C. Goal Programming and MCDM

Goal programming is but one of several approaches possible within the broader field of multiple criteria decision making (MCDM). It is a common practice within MCDM to present its different approaches in an independent, disjoint manner, giving the impression that each approach is completely autonomous. However, this is not the case. In fact, significant similarities exist among most of the MCDM methods. In this sense, the multiplex approach presented in Section V is a good example of a goal programming structure encompassing several single- and multiple-objective optimization methods. Furthermore, goal programming can provide a unifying basis for most MCDM models and methods. With this purpose, extended lexicographic goal programming has recently been proposed. To illustrate this concept, consider the representation provided in Eqs. (49) through (52):



Lexicographically minimize
U = {λ1D1 + μ1 Σi∈h1 (αiηi + βiρi)^p, ..., λjDj + μj Σi∈hj (αiηi + βiρi)^p, ..., λQDQ + μQ Σi∈hQ (αiηi + βiρi)^p}   (49)

Satisfy:
αiηi + βiρi − Dj ≤ 0,   i ∈ hj,   j ∈ {1, ..., Q}   (50)
fi(x) + ηi − ρi = ti,   η, ρ ≥ 0,   i ∈ {1, ..., q}   (51)
x ∈ F   (52)

where p is a real number belonging to the interval [1, ∞). Parameters αi and βi are the weights, reflecting preferential and normalizing purposes, attached to the negative and positive deviation variables of the ith goal, respectively; λj and μj are control parameters; and hj represents the index set of the goals placed in the jth priority level. The block of rigid constraints x ∈ F can be transferred to an additional first priority level in order to formulate the model within a multiplex format.

If the above structure is considered the primary model, then it is easy to demonstrate that an important number of multiple-criteria methods are just secondary models of the extended lexicographic goal programming model. Thus, the following multicriteria methods can be straightforwardly deduced just by applying different parameter specifications to the above model:

1. Conventional single-objective mathematical programming model
2. Nonlinear and linear weighted goal programming
3. Lexicographic linear goal programming
4. Chebyshev goal programming
5. Reference point method
6. Compromise programming (L1 bound, and L∞ bound or fuzzy programming with a linear membership function)
7. Interactive weighted Tchebycheff procedure

The use of goal programming as a unifying framework seems interesting for at least the following reasons. The extended lexicographic goal programming model stresses similarities between MCDM methods, which can help reduce the gaps between advocates of different approaches. Moreover, this unifying approach can become a useful teaching tool in the introduction of MCDM, avoiding the common presentation based upon a disjoint system of methods. Finally, the extended lexicographic goal programming approach allows us to model decision-making problems for which a good representation of a decision maker's preferences requires a mix of goal programming variants. In short, this type of general formulation can increase the enormous flexibility inherent to goal programming.
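As a rough sketch of how such parameter specifications work, one priority-level term of the achievement function in (49) can be evaluated for a few settings of the control parameters; the function name and the sample deviation values are assumptions for illustration:

```python
# Hedged sketch: evaluating one priority-level term of the extended
# model, lambda_j * D_j + mu_j * sum((alpha*eta + beta*rho)^p), where
# D_j is the maximum weighted deviation in the level. Symbol names
# follow the reconstruction in the text, not the authors' code.

def level_term(weighted_devs, lam, mu, p=1.0):
    """lam * max(devs) + mu * sum(d**p) for one priority level."""
    d_j = max(weighted_devs) if weighted_devs else 0.0
    return lam * d_j + mu * sum(d ** p for d in weighted_devs)

devs = [4.0, 1.0, 3.0]  # alpha_i*eta_i + beta_i*rho_i for goals in level j

weighted = level_term(devs, lam=0.0, mu=1.0)   # weighted (Archimedean) GP
chebyshev = level_term(devs, lam=1.0, mu=0.0)  # Chebyshev (MINMAX) GP
blended = level_term(devs, lam=0.5, mu=0.5)    # a compromise in between
```

Setting λ = 0, μ = 1 recovers the aggregate (weighted) measure, λ = 1, μ = 0 recovers the MINMAX measure, and intermediate values blend the two, mirroring the secondary-model reductions listed above.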

X. THE FUTURE

In the nearly half century since its development, goal programming has achieved and maintained its reputation as the workhorse of the multiple-objective optimization field. This is due to a combination of simplicity of form and practicality of approach. The more recent extensions to the approach have, thankfully, not negatively impacted these attributes. We envision a future in which goal programming will continue to utilize methods from other sectors, particularly artificial intelligence. The most significant advances, from a strictly pragmatic perspective, involve combining goal programming with evolutionary search methods, specifically genetic algorithms. Such extensions permit the rapid development of exceptionally stable solutions—the type of solutions needed in most real-world situations.

SEE ALSO THE FOLLOWING ARTICLES

Decision Support Systems • Evolutionary Algorithms • Executive Information Systems • Game Theory • Industry, Artificial Intelligence in • Model Building Process • Neural Networks • Object-Oriented Programming • Strategic Planning for/of Information Systems • Uncertainty

BIBLIOGRAPHY

Charnes, A., and Cooper, W. W. (1961). Management models and industrial applications of linear programming, Vols. 1 and 2. New York: John Wiley.
Charnes, A., and Cooper, W. W. (1977). Goal programming and multiple objective optimization, Part I. European Journal of Operational Research, Vol. 1, 39–54.
Dauer, J. P., and Kruger, R. J. (1977). An iterative approach to goal programming. Operational Research Quarterly, Vol. 28, 671–681.
Gass, S. I. (1986). A process for determining priorities and weights for large-scale linear goal programming models. Journal of the Operational Research Society, Vol. 37, 779–784.
Ignizio, J. P. (1963). S-II trajectory study and optimum antenna placement, Report SID-63. Downey, CA: North American Aviation Corporation.
Ignizio, J. P. (1976). Goal programming and extensions. Lexington Series. Lexington, MA: D. C. Heath & Company.
Ignizio, J. P. (1985). Introduction to linear goal programming. Beverly Hills, CA: Sage Publishing.
Ignizio, J. P., and Cavalier, T. M. (1994). Linear programming. Upper Saddle River, NJ: Prentice Hall.
Markowski, C. A., and Ignizio, J. P. (1983). Theory and properties of the lexicographic LGP dual. Large Scale Systems, Vol. 5, 115–121.
Romero, C. (1991). Handbook of critical issues in goal programming. Oxford, UK: Pergamon Press.
Romero, C. (2001). Extended lexicographic goal programming: A unifying approach. OMEGA, International Journal of Management Science, Vol. 29, 63–71.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, Vol. 63, 129–138.
Tamiz, M., and Jones, D. (1996). Goal programming and Pareto efficiency. Journal of Information and Optimization Sciences, Vol. 17, 291–307.
Tamiz, M., Jones, D. F., and Romero, C. (1998). Goal programming for decision making: An overview of the current state-of-the-art. European Journal of Operational Research, Vol. 111, 569–581.
Vitoriano, B., and Romero, C. (1999). Extended interval goal programming. Journal of the Operational Research Society, Vol. 50, 1280–1283.
