Discussion7_BI

 


Read the end-of-chapter application case “HP Applies Management Science Modeling to Optimize Its Supply Chain and Wins a Major Award” in Chapter 10 of the textbook, and answer the following questions.

  1. Describe the problem that a large company, such as HP, might face in offering many product lines and options.
  2. Why is there a possible conflict between marketing and operations?
  3. Summarize your understanding of the models and the algorithms used in this case.
  4. What benefits did HP derive from implementation of these models?

Note: Need 400 words. PFA textbook.

CHAPTER

Modeling and Analysis: Heuristic Search Methods and Simulation


LEARNING OBJECTIVES

• Explain the basic concepts of simulation and heuristics, and when to use them

• Understand how search methods are used to solve some decision support models

• Know the concepts behind and applications of genetic algorithms

• Explain the differences among algorithms, blind search, and heuristics

• Understand the concepts and applications of different types of simulation

• Explain what is meant by system dynamics, agent-based modeling, Monte Carlo, and discrete event simulation

• Describe the key issues of model management

In this chapter, we continue to explore some additional concepts related to the model base, one of the major components of decision support systems (DSS). As pointed out in the last chapter, we present this material with a note of caution: The purpose of this chapter is not necessarily for you to master the topics of modeling and analysis. Rather, the material is geared toward gaining familiarity with the important concepts as they relate to DSS and their use in decision making. We discuss the structure and application of some successful, time-proven models and methodologies: search methods, heuristic programming, and simulation. Genetic algorithms mimic the natural process of evolution to help find solutions to complex problems. The concepts and motivating applications of these advanced techniques are described in this chapter, which is organized into the following sections:

10.1 Opening Vignette: System Dynamics Allows Fluor Corporation to Better Plan for Project and Change Management 436
10.2 Problem-Solving Search Methods 437
10.3 Genetic Algorithms and Developing GA Applications 441
10.4 Simulation 446
10.5 Visual Interactive Simulation 453
10.6 System Dynamics Modeling 458
10.7 Agent-Based Modeling 461

Part IV • Prescriptive Analytics

10.1 OPENING VIGNETTE: System Dynamics Allows Fluor
Corporation to Better Plan for Project and Change
Management

INTRODUCTION

Fluor is an engineering and construction company with over 36,000 employees spread across several countries worldwide. The company’s net income in 2009 amounted to about $680 million on total revenue of $22 billion. As part of its operations, Fluor manages projects of varying sizes that are subject to scope changes, design changes, and schedule changes.

PRESENTATION OF PROBLEM

Fluor estimated that changes accounted for about 20 to 30 percent of revenue. Most changes were due to secondary impacts such as ripple effects, disruptions, and productivity loss. Previously, changes were collated and reported at a later period, and the burden of cost was allocated to the stakeholder responsible. In certain instances, when late surprises about cost and project schedule were attributed to clients, friction arose between clients and Fluor, which eventually affected future business dealings. Sometimes, cost impacts occurred at such a time and in such a fashion that it was difficult to take preventive measures. The company determined that to improve its efficiency, reduce legal ramifications with clients, and keep clients happy, it had to review its method of handling changes to projects. One challenge the company faced was that changes often remained far removed from the situation that warranted them. In such cases, it is difficult to determine the cause of a change, which affects subsequent measures to handle related change issues.

METHODOLOGY/SOLUTION

Fluor knew that one way of combating the issue was to foresee and avoid the events that might lead to changes. However, that alone would not be enough to solve the problem. The company needed to understand the dynamics of the different situations that could warrant changes to project plans. System dynamics was used as the base method in a three-part analytical solution for understanding the dynamics between different factors that could cause changes to be made. System dynamics is a methodology and simulation-modeling technique for analyzing complex systems using principles of cause and effect, feedback loops, and time-delayed and nonlinear effects. Building tools for rapidly tailoring a solution to different situations forms the second part of the three-part analytical solution. In this part, industry standards and company references are embedded, and the project plan is embedded as an input. The model then converges to simulate the correct amounts and timing of other factors such as staffing, project progress, productivity, and effects on productivity. The last part of the analytical solution was to deploy the project models to nonmodelers. Basically, the system takes inputs that are specific to the particular project being worked on and its environment, such as the labor market. Some other input parameters, transformed into numerical data, are related to progress curves, expenses, and labor laws and constraints.


The resultant system provides reports on project impacts as well as helps perform
cause-effect diagnostics.

RESULTS/BENEFITS

With this system, customers are able to perform “what-if” analysis even before a project is started so that project performance can be gauged. Through diagnostics, the system also helps explain why certain effects are realized based on their impact on the project plan. Since its development, Fluor has recorded over 100 extensive uses of its system dynamics model and project simulation system. As an example, the model was used to analyze changes to a mining project and save $10 million in their future impact. Also, based on the what-if capability of Fluor’s model, a company saved $10 million when the project team used the model to redesign the process of reviewing changes so that the speed of the company’s definition and approval procedures was increased.

QUESTIONS FOR THE OPENING VIGNETTE

1. Explain the use of system dynamics as a simulation tool for solving complex problems.
2. In what ways was it applied in Fluor Corporation to solve complex problems?
3. How does a what-if analysis help a decision maker to save on cost?
4. In your own words, explain the factors that might have triggered the use of system dynamics to solve change management problems in Fluor Corporation.
5. Pick a geographic region and business domain and list some corresponding relevant factors that would be used as inputs in building such a system.

WHAT WE CAN LEARN FROM THIS VIGNETTE

Changes to project plans and timelines are a major contributor to cost increases beyond the amounts initially budgeted for projects. In this case, Fluor relied on system dynamics to understand what, why, when, and how changes occurred to project plans. The models that the system dynamics approach produced helped the company correctly quantify the cost of projects even before they started. The vignette demonstrates that system dynamics remains a credible and robust methodology for understanding business processes and creating “what-if” analyses of the impact of both expected and unexpected changes in project plans.

Source: E. Godlewski, G. Lee, and K. Cooper, “System Dynamics Transforms Fluor Project and Change Management,” Interfaces, Vol. 42, No. 1, 2012, pp. 17-32.

10.2 PROBLEM-SOLVING SEARCH METHODS
We next turn to several well-known search methods used in the choice phase of problem
solving. These include analytical techniques, algorithms, blind searching, and heuristic
searching.

The choice phase of problem solving involves a search for an appropriate course of action (among those identified during the design phase) that can solve the problem. Several major search approaches are possible, depending on the criteria (or criterion) of choice and the type of modeling approach used. These search approaches are shown in Figure 10.1. For normative models, such as mathematical programming-based ones, either an analytical approach is used or a complete, exhaustive enumeration (comparing the outcomes of all the alternatives) is applied. For descriptive models, a comparison of a limited number of alternatives is used, either blindly or by employing heuristics. Usually the results guide the decision maker’s search.

FIGURE 10.1 Formal Search Approaches:

• Optimization (analytical): generate improved solutions or get the best solution directly; stop when no improvement is possible; the result is optimal (best).
• Blind search, complete enumeration (exhaustive): all possible solutions are checked; stop when all alternatives are checked; the result is optimal (best).
• Blind search, partial search: check only some alternatives, systematically dropping inferior solutions; stop when the solution is good enough (comparisons, simulation); the result is the best among the alternatives checked.
• Heuristics: only promising solutions are considered; stop when the solution is good enough; the result is good enough.

Analytical Techniques

Analytical techniques use mathematical formulas to derive an optimal solution directly or to predict a certain result. Analytical techniques are used mainly for solving structured problems, usually of a tactical or operational nature, in areas such as resource allocation or inventory management. Blind or heuristic search approaches generally are employed to solve more complex problems.

Algorithms

Analytical techniques may use algorithms to increase the efficiency of the search. An algorithm is a step-by-step search process for obtaining an optimal solution (see Figure 10.2). (Note: There may be more than one optimum, so we say an optimal solution rather than the optimal solution.) Solutions are generated and tested for possible improvements. An improvement is made whenever possible, and the new solution is subjected to an improvement test, based on the principle of choice (i.e., objective value found). The process continues until no further improvement is possible. Most mathematical programming problems are solved by using efficient algorithms. Web search engines use various algorithms to speed up searches and produce accurate results.

FIGURE 10.2 The Process of Using an Algorithm: test whether an improvement to the proposed solution is possible; if yes, generate a new, improved proposed solution and test again; if no, stop; the solution is optimal.
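The improve-and-test loop described above can be sketched in a few lines of code. The minimization problem below (a simple quadratic over the integers) is our own illustration, not an example from the textbook:

```python
# A minimal sketch of the improve-and-test loop behind many algorithms:
# repeatedly propose an improved solution until no improvement is possible.
# The toy problem (minimize (x - 7)^2 over integers) is purely illustrative.

def cost(x):
    return (x - 7) ** 2                        # minimized at x = 7

def improve(x):
    """Return a better neighboring solution, or None if none exists."""
    best = x
    for candidate in (x - 1, x + 1):           # neighbors of the current solution
        if cost(candidate) < cost(best):
            best = candidate
    return best if best != x else None

def run_algorithm(start):
    solution = start
    while True:
        better = improve(solution)
        if better is None:                     # no improvement possible: stop
            return solution                    # solution is optimal
        solution = better

print(run_algorithm(0))   # 7
```

Each pass of the loop corresponds to one trip around Figure 10.2: test for a possible improvement, apply it if one exists, and stop when none remains.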

Blind Searching
In conducting a search, a description of a desired solution may be given. This is called a
goal. A set of possible steps leading from initial conditions to the goal is called the search
steps. Problem solving is done by searching through the possible solutions. The first of
these search methods is blind searching. The second is heuristic searching.

Blind search techniques are arbitrary search approaches that are not guided. There
are two types of blind searches: a complete enumeration, for which all the alternatives are
considered and therefore an optimal solution is discovered; and an incomplete, or partial,
search, which continues until a good-enough solution is found. The latter is a form of
suboptimization.

There are practical limits on the amount of time and computer storage available for
blind searches. In principle, blind search methods can eventually find an optimal solution
in most search situations, and, in some situations, the scope of the search can be limited;
however, this method is not practical for solving very large problems because too many
solutions must be examined before an optimal solution is found.
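The scaling problem described above is easy to see in code. The sketch below is a blind search by complete enumeration over binary decision variables; the fitness function (count the 1s) is our own illustrative choice:

```python
# A sketch of blind search by complete enumeration: every candidate
# solution is checked, so an optimal solution is guaranteed, but the
# search space for n binary decision variables has 2**n candidates.
from itertools import product

def complete_enumeration(n, fitness):
    best, best_score = None, float("-inf")
    for candidate in product((0, 1), repeat=n):   # all 2**n solutions
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = complete_enumeration(10, sum)       # checks 2**10 = 1,024 candidates
print(best, score)
```

Each additional decision variable doubles the search space, which is why complete enumeration, despite its optimality guarantee, is impractical for very large problems.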

Heuristic Searching
For many applications, it is possible to find rules to guide the search process and reduce
the number of necessary computations through heuristics. Heuristics are the informal,
judgmental knowledge of an application area that constitute the rules of good judgment
in the field. Through domain knowledge, they guide the problem-solving process.
Heuristic programming is the process of using heuristics in problem solving. This
is done via heuristic search methods, which often operate as algorithms but limit the
solutions examined either by limiting the search space or stopping the method early.
Usually, rules that have either demonstrated their success in practice or are theoretically
solid are applied in heuristic searching. In Application Case 10.1, we provide an example
of a DSS in which the models are solved using heuristic searching.
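As a concrete sketch of a heuristic, consider the classic greedy rule of thumb for the knapsack problem: take items in order of value per unit of weight. The items and capacity below are invented for illustration:

```python
# A sketch of heuristic search: a greedy "best value density first" rule
# for the knapsack problem. It is fast and often good, but unlike
# complete enumeration it does not guarantee the optimal solution.

def greedy_knapsack(items, capacity):
    """items: list of (name, value, weight); returns (chosen names, total value)."""
    # Heuristic rule: rank items by value per unit weight.
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    chosen, total_value, total_weight = [], 0, 0
    for name, value, weight in ranked:
        if total_weight + weight <= capacity:
            chosen.append(name)
            total_value += value
            total_weight += weight
    return chosen, total_value

items = [("a", 60, 10), ("b", 100, 20), ("c", 120, 30)]
print(greedy_knapsack(items, 50))   # (['a', 'b'], 160)
```

Note that the heuristic returns a value of 160 here, while the true optimum (items b and c) is worth 220: a good-enough answer obtained quickly, exactly the trade-off heuristic searching accepts.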

Application Case 10.1
Chilean Government Uses Heuristics to Make Decisions on School Lunch Providers
The Junta Nacional de Auxilio Escolar y Becas
(JUNAEB), an agency of the Chilean government,
promotes integration and retention of socially
vulnerable children in the country’s school system.
JUNAEB’s school meal program provides meals
for approximately 10,000 schools. Decisions on
meal providers are made through an annual tender
using a combinatorial auction, where food industry

firms bid on supply contracts, based on a series of
disjoint, compact geographical areas called territorial
units (TUs). These territorial units consist of districts
spanning the country.

When the Chilean economy suffered a down-
turn, many competing meal service providers ceased
their operations. Thus, the number of suppliers
participating in the combinatorial auction was reduced.

The entire school meal policy was called into question.
The central problem was in defining TUs. JUNAEB
divided Chile’s 13 official regions, consisting of several
districts, into 136 TUs based on geographical criteria,
which attempted to equalize the number of meals
to be served in each TU. This process led to severe
disparities as the districts in regions requiring large
numbers of meals were assigned to a single TU; the
remaining districts were combined into TUs requiring similar numbers of meals but covering a possibly larger geographical area and number of schools in each district. Sometimes, a firm that won an attractive TU was also paired with an unattractive TU and, hence, was unable to fulfill its contract.

Upon realizing the need to determine new configurations of territorial units, JUNAEB sought to homogenize characteristics across territorial units based on a score that considered four characteristics of each constituent district: number of meals, number of schools, geographic area, and accessibility. A series of operations research methodologies was applied toward reaching the goal of homogenizing the TUs.

The analytic hierarchy process (AHP) was first applied to determine the relative weight of each of the four characteristics for each TU in each region, and then total scores for each TU were calculated. Next, a local search heuristic was employed to find a set of homogeneously attractive TUs within each region. A TU’s attractiveness was calculated using the values derived from the AHP for each characteristic, and the TU criterion weights were used in the local search heuristic’s assessment of each region. The degree of homogeneity was measured as the standard deviation, which captures the dispersion of TU attractiveness levels by quantifying the divergence of each TU in a region from the regional average. The heuristic attempts to minimize this measure by exchanging the districts in each TU with districts in other TUs in the same region. The initial set of TUs in a region is defined based on expert opinions. The heuristic then proceeds toward the best solution by transferring districts from one TU to another until a local minimum is reached, that is, until no further transfer of districts across TUs lowers the standard deviation.

The new configuration limited the number of meals for each TU to between 15,000 and 40,000, and each of the 13 geographical regions was assigned TUs accordingly. The districts belonging to the TUs served as the basic units in homogenizing the TUs. Each district in a TU that served more than 10,000 meals was divided into an equal number of subdistricts.

An integer linear programming (ILP) model was applied to the results generated by a cluster enumeration algorithm, which formed TUs as clusters created by grouping contiguous districts and subdistricts. For each region, the ILP model selected a set of clusters constituting a partition of the region that minimized the difference between the most and least attractive clusters, based on the TU scores calculated using the criteria weights for each cluster.

Finally, a combination of ILP and heuristics was applied: the results obtained from the ILP were used as the initial solution, on which the local search heuristic was then run. This further reduced the standard deviation of the TUs’ attractiveness scores.

Existing data about the TUs from 2007 was used as the baseline, and the results from each of the three methodologies showed a significant level of homogeneity that did not exist in the 2007 data.
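The district-transfer idea behind the local search can be sketched as follows. The district scores and TU groupings are invented for illustration and are far simpler than JUNAEB’s actual model:

```python
# A simplified, hypothetical sketch of the case's local search idea:
# transfer districts between territorial units (TUs) whenever the move
# lowers the standard deviation of TU attractiveness totals, stopping at
# a local minimum. District scores are invented; this is not JUNAEB's model.
import statistics

def best_transfer(tus, current):
    """Return (new_std, src, dst, district) for the best improving move, or None."""
    best = None
    for i, src in enumerate(tus):
        if len(src) <= 1:                      # keep every TU nonempty
            continue
        for j in range(len(tus)):
            if i == j:
                continue
            for district in src:
                totals = [sum(tu) for tu in tus]
                totals[i] -= district          # simulate the transfer
                totals[j] += district
                new = statistics.pstdev(totals)
                if new < current - 1e-9 and (best is None or new < best[0]):
                    best = (new, i, j, district)
    return best

def local_search(tus):
    current = statistics.pstdev([sum(tu) for tu in tus])
    while True:
        move = best_transfer(tus, current)
        if move is None:                       # local minimum reached
            return tus, current
        current, i, j, district = move
        tus[i].remove(district)                # apply the transfer
        tus[j].append(district)

tus = [[10, 8, 6], [3], [2, 1]]                # invented district attractiveness scores
result, spread = local_search(tus)
print(result, round(spread, 2))
```

Starting from badly unbalanced TU totals of 24, 3, and 3, the search ends at totals of 11, 10, and 9: each accepted transfer strictly lowers the standard deviation, so the procedure is guaranteed to terminate at a local minimum.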

QUESTIONS FOR DISCUSSION

1. What were the main challenges faced by JUNAEB?
2. What operations research methodologies were employed in achieving homogeneity across territorial units?
3. What other approaches could you use in this case study?

What We Can Learn from This Application Case
Heuristic methods work best in providing solutions for problems that would otherwise involve exhaustive, repetitive processes to arrive at a solution. The application case also shows that combinations of operations research methodologies can play a vital role in solving a particular problem.

Source: D. M. G. Alfredo, E. N. R. David, M. Cristian, and Z. V. G. Andres, “Quantitative Methods for a New Configuration of Territorial Units in a Chilean Government Agency Tender Process,” Interfaces, 2011.


SECTION 10.2 REVIEW QUESTIONS

1. What is a search approach?
2. List the different problem-solving search methods.
3. What are the practical limits to blind searching?
4. How are algorithms and heuristic search methods similar? How are they different?

10.3 GENETIC ALGORITHMS AND DEVELOPING GA APPLICATIONS
Genetic algorithms (GA) belong to the family of global search techniques used to find approximate solutions to optimization-type problems that are too complex to be solved with traditional optimization methods (which are guaranteed to produce the best solution to a specific problem). Genetic algorithms have been successfully applied to a wide range of highly complex real-world problems, including vehicle routing (Baker and Syechew, 2003), bankruptcy prediction (Shin and Lee, 2002), and Web searching (Nick and Themis, 2001).

Genetic algorithms are a part of the machine-learning family of methods under
artificial intelligence. Because they cannot guarantee the truly optimal solution, genetic
algorithms are considered to be heuristic methods. Genetic algorithms are sets of
computational procedures that conceptually follow the steps of the biological process
of evolution. That is, better and better solutions evolve from the previous generation of
solutions until an optimal or near-optimal solution is obtained.

Genetic algorithms (also known as evolutionary algorithms) demonstrate
self-organization and adaptation in much the same way that biological organisms do
by following the chief rule of evolution, survival of the fittest. The method improves the
solutions by producing offspring (i.e., a new collection of feasible solutions) using the best
solutions of the current generation as “parents.” The generation of offspring is achieved
by a process modeled after biological reproduction whereby mutation and crossover
operators are used to manipulate genes in constructing newer and “better” chromosomes.
Notice that a simple analogy between genes and decision variables and between
chromosomes and potential solutions underlies the genetic algorithm terminology.

Example: The Vector Game

To illustrate how genetic algorithms work, we describe the classical Vector game
(see Walbridge, 1989). This game is similar to MasterMind. As your opponent gives you clues
about how good your guess is (i.e., the outcome of the fitness function), you create a new
solution, using the knowledge gained from the recently proposed solutions and their quality.

Description of the Vector Game Vector is played against an opponent who secretly writes down a string of six digits (in a genetic algorithm, this string corresponds to a chromosome). Each digit is a decision variable that can take the value of either 0 or 1. For example, say that the secret number that you are to figure out is 001010. You must try to guess this number as quickly as possible (with the fewest trials). You present a sequence of digits (a guess) to your opponent, and he or she tells you how many of the digits (but not which ones) you guessed correctly (i.e., the fitness function or quality of your guess). For example, the guess 110101 has no correct digits (i.e., the score = 0). The guess 111101 has only one correct digit (the third one; hence, the score = 1).

Default Strategy: Random Trial and Error There are 64 possible six-digit strings of binary numbers. If you pick numbers at random, you will need, on average, 32 guesses to obtain the right answer. Can you do it faster? Yes, if you can interpret the feedback provided to you by your opponent (a measure of the goodness or fitness of your guess). This is how a genetic algorithm works.


Improved Strategy: Use of Genetic Algorithms The following are the steps in solving
the Vector game with genetic algorithms:

1. Present to your opponent four strings, selected at random. (Four is an arbitrary choice; through experimentation, you may find that five or six would be better.) Assume that you have selected these four:
   (A) 110100; score = 1 (i.e., one digit guessed correctly)
   (B) 111101; score = 1
   (C) 011011; score = 4
   (D) 101100; score = 3
2. Because none of the strings is entirely correct, continue.
3. Delete (A) and (B) because of their low scores. Call (C) and (D) the parents.
4. “Mate” the parents by splitting each string between the second and third digits (the position of the split is randomly selected):
   (C) 01:1011
   (D) 10:1100
   Now combine the first two digits of (C) with the last four of (D) (this is called crossover). The result is (E), the first offspring:
   (E) 011100; score = 3
   Similarly, combine the first two digits of (D) with the last four of (C). The result is (F), the second offspring:
   (F) 101011; score = 4
   It looks as though the offspring are not doing much better than the parents.
5. Now copy the original (C) and (D).
6. Mate and crossover the new parents, but use a different split. Now you have two new offspring, (G) and (H):
   (C) 0110:11
   (D) 1011:00
   (G) 0110:00; score = 4
   (H) 1011:11; score = 3
   Next, repeat step 2: Select the best “couple” from all the previous solutions to reproduce. You have several options, such as (G) and (C). Select (G) and (F). Now duplicate and crossover. Here are the results:
   (F) 1:01011
   (G) 0:11000
   (I) 111000; score = 3
   (J) 001011; score = 5
   You can also generate more offspring:
   (F) 101:011
   (G) 011:000
   (K) 101000; score = 4
   (L) 011011; score = 4
   Now repeat the process with (J) and (K) as parents, and duplicate the crossover:
   (J) 00101:1
   (K) 10100:0
   (M) 001010; score = 6

That’s it! You have reached the solution after 13 guesses. Not bad compared to the expected average of 32 for a random-guess strategy.
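The manual play-through above can be automated with a few lines of code. The sketch below is a toy genetic algorithm for the Vector game; the population size, mutation rate, and generation limit are illustrative choices, not values prescribed by the text:

```python
# A toy genetic algorithm for the Vector game: chromosomes are 6-bit
# strings, fitness counts the digits matching the secret, and the two
# fittest strings survive as elites/parents each generation.
import random

SECRET = [0, 0, 1, 0, 1, 0]

def fitness(guess):
    return sum(g == s for g, s in zip(guess, SECRET))

def crossover(a, b):
    p = random.randint(1, len(a) - 1)          # random split position
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(chromosome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in chromosome]

def genetic_algorithm(generations=200):
    population = [[random.randint(0, 1) for _ in SECRET] for _ in range(4)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(SECRET):
            break                              # secret string found
        parents = population[:2]               # elitism: keep the two fittest
        child1, child2 = crossover(*parents)
        population = parents + [mutate(child1), mutate(child2)]
    return max(population, key=fitness)

random.seed(7)                                 # reproducible demo run
best = genetic_algorithm()
print(best, fitness(best))
```

Because the two best strings are always carried forward, the best fitness found never decreases from one generation to the next, mirroring the elitism idea discussed in the next subsection.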


Terminology of Genetic Algorithms

A genetic algorithm is an iterative procedure that represents its candidate solutions as
strings of genes called chromosomes and measures their viability with a fitness function.
The fitness function is a measure of the objective to be obtained (i.e., maximum or
minimum). As in biological systems, candidate solutions combine to produce offspring
in each algorithmic iteration, called a generation. The offspring themselves can become
candidate solutions. From the generation of parents and children, a set of the fittest
survive to become parents that produce offspring in the next generation. Offspring are
produced using a specific genetic reproduction process that involves the application of
crossover and mutation operators. Along with the offspring, some of the best solutions
are also migrated to the next generation (a concept called elitism) in order to preserve
the best solution achieved up until the current iteration. Following are brief definitions of
these key terms:

• Reproduction. Through reproduction, genetic algorithms produce new generations of potentially improved solutions by selecting parents with higher fitness ratings or by giving such parents a greater probability of being selected to contribute to the reproduction process.

• Crossover. Many genetic algorithms use a string of binary symbols (each corresponding to a decision variable) to represent chromosomes (potential solutions), as was the case in the Vector game described earlier. Crossover means choosing a random position in the string (e.g., after the first two digits) and exchanging the segments either to the right or the left of that point with those of another string’s segments (generated using the same splitting schema) to produce two new offspring.

• Mutation. This genetic operator was not shown in the Vector game example. Mutation is an arbitrary (and minimal) change in the representation of a chromosome. It is often used to prevent the algorithm from getting stuck in a local optimum. The procedure randomly selects a chromosome (giving more probability to the ones with better fitness values), randomly identifies a gene in the chromosome, and inverts its value (from 0 to 1 or from 1 to 0), thus generating one new chromosome for the next generation. The occurrence of mutation is usually set to a very low probability (e.g., 0.1 percent).

• Elitism. An important aspect of genetic algorithms is preserving a few of the best solutions to evolve through the generations. That way, you are guaranteed to end up with the best possible solution found by the current application of the algorithm. In practice, a few of the best solutions are migrated to the next generation.
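The fitness-proportionate selection described under Reproduction is often implemented as a "roulette wheel." The sketch below reuses strings and scores from the Vector game purely for illustration:

```python
# A sketch of fitness-proportionate ("roulette wheel") selection: a
# parent's chance of being chosen is proportional to its fitness.
import random

def select_parent(population, fitnesses):
    total = sum(fitnesses)
    spin = random.uniform(0, total)            # spin the wheel once
    cumulative = 0.0
    for chromosome, fit in zip(population, fitnesses):
        cumulative += fit
        if spin <= cumulative:
            return chromosome
    return population[-1]                      # guard against float rounding

random.seed(42)                                # reproducible demo
population = ["011011", "101100", "110100", "111101"]
fitnesses = [4, 3, 1, 1]                       # Vector game scores
counts = {c: 0 for c in population}
for _ in range(9000):
    counts[select_parent(population, fitnesses)] += 1
print(counts)                                  # "011011" wins roughly 4/9 of spins
```

The fittest string is selected most often, yet the weaker strings are still chosen occasionally, which preserves the diversity the algorithm needs to avoid premature convergence.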

How Do Genetic Algorithms Work?

Figure 10.3 is a flow diagram of a typical genetic algorithm process. The problem to be solved must be described and represented in a manner amenable to a genetic algorithm. Typically, this means that a string of 1s and 0s (or other more recently proposed complex representations) is used to represent the decision variables, the collection of which represents a potential solution to the problem. Next, the decision variables are mathematically and/or symbolically pooled into a fitness function (or objective function). The fitness function can be one of two types: maximization (something that is more is better, such as profit) or minimization (something that is less is better, such as cost). Along with the fitness function, all of the constraints on decision variables that collectively dictate whether a solution is a feasible one should be demonstrated.

FIGURE 10.3 A Flow Diagram of a Typical Genetic Algorithm Process: represent the problem’s chromosome structure; generate initial solutions (the initial generation); test whether the solution is satisfactory; if yes, stop and deploy the solution; if no, select elite solutions and carry them into the next generation, select parents to reproduce, apply crossover and mutation, and form the next generation of solutions from the elites and the offspring.

Remember that
only feasible solutions can be part of the solution population. Infeasible ones are filtered out before a generation of solutions is finalized in the iterative process. Once the representation is complete, an initial set of solutions is generated (i.e., the initial population). All infeasible solutions are eliminated, and fitness functions are computed for the feasible ones. The solutions are rank-ordered based on their fitness values; those with better fitness values are given a higher probability (proportional to their relative fitness value) in the random selection process.

A few of the best solutions are migrated to the next generation. Using a random process, several sets of parents are identified to take part in the generation of offspring. Using the randomly selected parents and the genetic operators (i.e., crossover and mutation), offspring are generated. The number of potential solutions to generate is determined by the population size, an arbitrary parameter set prior to the evolution of solutions. Once the next generation is constructed, the solutions go through the evaluation and generation of new populations for a number of iterations. This iterative process continues until a good-enough solution is obtained (an optimum is not guaranteed), no improvement occurs over several generations, or the time/iteration limit is reached.


As mentioned, a few parameters must be set prior to the execution of the genetic algorithm. Their values are dependent on the problem being solved and are usually determined through trial and error:

• Number of initial solutions to generate (i.e., the initial population)
• Number of offspring to generate (i.e., the population size)
• Number of parents to keep for the next generation (i.e., elitism)
• Mutation probability (usually a very low number, such as 0.1 percent)
• Probability distribution of crossover point occurrence (generally equally weighted)
• Stopping criteria (time/iteration based or improvement based)
• The maximum number of iterations (if the stopping criteria are time/iteration based)

Sometimes these parameters are set and frozen beforehand, or they can be varied systematically while the algorithm is running for better performance.

Limitations of Genetic Algorithms

According to Grupe and Jooste (2004), the following are among the most important
limitations of genetic algorithms:

• Not all problems can be framed in the mathematical manner that genetic algorithms
demand.

• Development of a genetic algorithm and interpretation of the results require an
expert who has both the programming and statistical/mathematical skills demanded
by the genetic algorithm technology in use.

• It is known that in a few situations the “genes” from a few comparatively highly
fit (but not optimal) individuals may come to dominate the population, causing it
to converge on a local maximum. When the population has converged, the ability
of the genetic algorithm to continue to search for better solutions is effectively
eliminated.

• Most genetic algorithms rely on random-number generators that produce different
results each time the model runs. Although there is likely to be a high degree of
consistency among the runs, they may vary.

• Locating good variables that work for a particular problem is difficult. Obtaining the
data to populate the variables is equally demanding.

• Selecting methods by which to evolve the system requires thought and evaluation.
If the range of possible solutions is small, a genetic algorithm will converge too
quickly on a solution. When evolution proceeds too quickly, thereby altering good
solutions too quickly, the results may miss the optimum solution.

Genetic Algorithm Applications

Genetic algorithms are a type of machine learning for representing and solving complex
problems. They provide a set of efficient, domain-independent search heuristics for a
broad spectrum of applications, including the following:

• Dynamic process control
• Induction and optimization of rules
• Discovery of new connectivity topologies (e.g., neural computing connections, neural network design)
• Simulation of biological models of behavior and evolution
• Complex design of engineering structures
• Pattern recognition
• Scheduling
• Transportation and routing

446 Part IV • Prescriptive Analytics

• Layout and circuit design
• Telecommunication
• Graph-based problems

A genetic algorithm interprets information that enables it to reject inferior solutions
and accumulate good ones, and thus it learns about its universe. Genetic algorithms are
also suitable for parallel processing.

Because the kernels of genetic algorithms are pretty simple, it is not difficult to write
computer codes to implement them. For better performance, software packages are available.

Several genetic algorithm codes are available for fee or for free (try searching the Web
for research and commercial sites). In addition, a number of commercial packages offer
online demos. Representative commercial packages include Microsoft Solver and XpertRule
GenAsys, an ES shell with an embedded genetic algorithm (see xpertrule.com). Evolver
(from Palisade Corp., palisade.com) is an optimization add-in for Excel. It uses a genetic
algorithm to solve complex optimization problems in finance, scheduling, manufacturing,
and so on.

SECTION 10.3 REVIEW QUESTIONS

1. Define genetic algorithm.
2. Describe the evolution process in genetic algorithms. How is it similar to biological evolution?
3. Describe the major genetic algorithm operators.
4. List major areas of genetic algorithm application.
5. Describe in detail three genetic algorithm applications.
6. Describe the capability of Evolver as an optimization tool.

We now turn our attention to simulation, a class of modeling method that has
enjoyed significant actual use in decision making.

10.4 SIMULATION
Simulation is the appearance of reality. In MSS, simulation is a technique for conducting
experiments (e.g., what-if analyses) with a computer on a model of a management system.

Typically, real decision-making situations involve some randomness. Because DSS
deals with semistructured or unstructured situations, reality is complex, which may not
be easily represented by optimization or other models but can often be handled by
simulation. Simulation is one of the most commonly used DSS methods. See Application
Cases 10.2 and 10.3 for examples. Application Case 10.3 illustrates the value of simulation
in a setting where sufficient time is not available to perform clinical trials.

Application Case 10.2
Improving Maintenance Decision Making in the Finnish Air Force Through Simulation
The Finnish Air Force wanted to gain efficiency in its
maintenance system in order to keep as many aircraft
as possible safely available at all times for training,
missions, and other tasks, as needed. A discrete
event simulation program similar to those used in
manufacturing was developed to accommodate
workforce issues, task times, material handling
delays, and the likelihood of equipment failure.

The developers had to consider aircraft
availability, resource requirements for international
operations, and the periodic maintenance program.
The information for normal conditions and conflict conditions was input into the simulation program because the maintenance schedule could be altered from one situation to another.

The developers had to estimate some informa-
tion due to confidentiality, especially with regard
to conflict scenarios (no data on battle-damage
probabilities were available). They used several
methods to acquire and secure data, such as asking
experts in aircraft maintenance fields at different
levels for their opinions and designing a model
that allowed the confidential data to be input into
the system. Also, the simulations were compared to
actual performance data to make sure the simulated
results were accurate.

The maintenance program was broken into
three levels:

1. The organizational level, in which the fighter
squadron takes care of preflight checks, turn-
around checks (which occur when an aircraft
returns), and other minor repairs at the main
command airbase in normal conditions

2. The intermediate level, in which more compli-
cated periodic maintenance and failure repairs
are taken care of at the air command repair
shop at the main airbase in normal conditions

3. The depot level, in which all major periodic
maintenance is taken care of and is located
away from the main airbase

During conflict conditions, the system is decentralized from the main airbase. The maintenance levels just described may continue to do the exact same repairs, or periodic maintenance may be eliminated. Additionally, depending on need, supplies, and capabilities, any of these levels may take care of any maintenance and repairs needed at any time during conflict conditions.

The simulation model was implemented using Arena software based on the SIMAN language and involved using a graphical user interface (GUI) that was executed using Visual Basic for Applications (VBA). The input data included simulation parameters and the initial system state: characteristics of the air commands, maintenance needs, and flight operations; accumulated flight hours; and the location of each aircraft. Excel spreadsheets were used for data input and output. Additionally, parameters of some of the input data were estimated from statistical data or based on information from subject matter experts. These included probabilities for time between failures, damage sustained during a single-flight mission, the duration of each type of periodic maintenance, failure repair, damage repair, the times between flight missions, and the duration of a mission. This simulation model was so successful that the Finnish Army, in collaboration with the Finnish Air Force, has now devised a simulation model for the maintenance of some of its new transport helicopters.

Source: Based on V. Mattila, K. Virtanen, and T. Raivio, "Improving Maintenance Decision Making in the Finnish Air Force Through Simulation," Interfaces, Vol. 38, No. 3, May/June 2008, pp. 187-201.

Application Case 10.3
Simulating Effects of Hepatitis B Interventions

Although the United States has made significant investments in healthcare, some problems seem to defy solution. For example, a sizable proportion of the Asian population in the United States is more prone than others to the Hepatitis B viral disease. In addition to the social problems associated with the disease (like isolation), one out of every four chronically infected individuals stands the risk of suffering from liver cancer or cirrhosis if the disease is not treated effectively. Managing this disease could be very costly. There are a number of control measures, including screening, vaccination, and treatment procedures. The government is reluctant to spend money on any method of control if it is not cost-effective and there is no proof of increased health for people afflicted with the disease. Even though not all the control measures are optimal for all situations, the best method or combination of methods for combatting the disease is not yet known.

Methodology/Solution
A multidisciplinary team consisting of those with
medical, management science, and engineering back-
grounds developed a mathematical model using
operations research (OR) methods that determined the
right combination of control measures to be used to
combat Hepatitis B in the Asian and Pacific Island
populations. Normally, clinical trials are used in the
medical field to determine the best course of action in
disease treatment and prevention. Complicating this
situation is the unusually long period of time it takes
Hepatitis B to progress. Because of the high cost
that would accompany clinical trials in this situation,
operations research models and methods were used.
A combination of Markov and decision models
offered a more cost-effective way for determining
what combination of control measures to use at any
point in time. The decision model helps measure the
economic and health benefits of various possibilities
of screening, treatment, and vaccination. The Markov
model was used to model the progression of Hepatitis
B. The new model was created based on past
literature and expertise from one of the researchers
and draws from actual current infection and treat-
ment data. Policymakers built the new model using
Microsoft Excel because it is user friendly.

Results/Benefits
The resultant model was analyzed vis-à-vis existing
control programs in both the United States and
China. In the United States four strategies were
developed and compared to the existing strategy.
The four strategies are:

a. All individuals are vaccinated.
b. Individuals are first screened to determine whether they have a chronic infection. If yes, then they are treated.
c. Individuals are first screened to determine whether they have a chronic infection. If they have the infection, they are treated. In addition, close associates of those infected are also screened and vaccinated, if necessary.
d. Individuals are first screened to determine whether they have a chronic infection or need vaccination. If they are infected, they are treated. If they need vaccination, they are vaccinated.

Results of the simulations indicated that
performing blood tests to determine chronic infection
and vaccinating associates of infected people are
cost-effective.

In China, the model helped design a catch-up
vaccination policy for children and adolescents.
This catch-up policy was compared with current
coverage levels of Hepatitis B vaccination. It was
concluded that when individuals under the age of
19 years are vaccinated, the health outcomes are
improved in the long run. In fact, this policy was
more financially cost-effective than the current
disease control policy in place at the time of the
evaluation.

QUESTIONS FOR DISCUSSION

1. Explain the advantage of operations research
methods such as simulation over clinical trial
methods in determining the best control measure
for Hepatitis B.

2. In what ways do the decision and Markov
models provide cost-effective ways of combating
the disease?

3. Discuss how multidisciplinary background is
an asset in finding a solution for the problem
described in the case.

4. Besides healthcare, in what other domain could
such a modeling approach help reduce cost?

Source: D. W. Hutton, M. L. Brandeau, and S. K. So, "Doing
Good with Good OR: Supporting Cost-Effective Hepatitis B
Interventions," Interfaces, Vol. 41, No. 3, 2011, pp. 289-300.

Major Characteristics of Simulation

Simulation is not strictly a type of model; models generally represent reality, whereas
simulation typically imitates it. In a practical sense, there are fewer simplifications of
reality in simulation models than in other models. In addition, simulation is a technique
for conducting experiments. Therefore, it involves testing specific values of the decision
or uncontrollable variables in the model and observing the impact on the output variables.
At DuPont, decision makers had initially chosen to purchase more railcars; however, an
alternative involving better scheduling of the existing railcars was developed, tested, and
found to have excess capacity, and it ended up saving money.

Simulation is a descriptive rather than a normative method. There is no automatic
search for an optimal solution. Instead, a simulation model describes or predicts
the characteristics of a given system under different conditions. When the values of
the characteristics are computed, the best of several alternatives can be selected. The
simulation process usually repeats an experiment many times to obtain an estimate (and a
variance) of the overall effect of certain actions. For most situations, a computer simulation
is appropriate, but there are some well-known manual simulations (e.g., a city police
department simulated its patrol car scheduling with a carnival game wheel).

Finally, simulation is normally used only when a problem is too complex to be
treated using numerical optimization techniques. Complexity in this situation means either
that the problem cannot be formulated for optimization (e.g., because the assumptions do
not hold), that the formulation is too large, that there are too many interactions among
the variables, or that the problem is stochastic in nature (i.e., exhibits risk or uncertainty).

Advantages of Simulation

Simulation is used in decision support modeling for the following reasons:

• The theory is fairly straightforward.
• A great amount of time compression can be attained, quickly giving a manager some feel as to the long-term (1- to 10-year) effects of many policies.
• Simulation is descriptive rather than normative. This allows the manager to pose what-if questions. Managers can use a trial-and-error approach to problem solving and can do so faster, at less expense, more accurately, and with less risk.
• A manager can experiment to determine which decision variables and which parts of the environment are really important, and with different alternatives.
• An accurate simulation model requires an intimate knowledge of the problem, thus forcing the MSS builder to constantly interact with the manager. This is desirable for DSS development because the developer and manager both gain a better understanding of the problem and the potential decisions available.
• The model is built from the manager's perspective.
• The simulation model is built for one particular problem and typically cannot solve any other problem. Thus, no generalized understanding is required of the manager; every component in the model corresponds to part of the real system.
• Simulation can handle an extremely wide variety of problem types, such as inventory and staffing, as well as higher-level managerial functions, such as long-range planning.
• Simulation generally can include the real complexities of problems; simplifications are not necessary. For example, simulation can use real probability distributions rather than approximate theoretical distributions.
• Simulation automatically produces many important performance measures.
• Simulation is often the only DSS modeling method that can readily handle relatively unstructured problems.
• Some relatively easy-to-use simulation packages (e.g., Monte Carlo simulation) are available. These include add-in spreadsheet packages (e.g., @RISK), influence diagram software, Java-based (and other Web development) packages, and the visual interactive simulation systems to be discussed shortly.


Disadvantages of Simulation

The primary disadvantages of simulation are as follows:

• An optimal solution cannot be guaranteed, but relatively good ones generally are
found.

• Simulation model construction can be a slow and costly process, although newer
modeling systems are easier to use than ever.

• Solutions and inferences from a simulation study are usually not transferable to
other problems because the model incorporates unique problem factors.

• Simulation is sometimes so easy to explain to managers that analytic methods are
often overlooked.

• Simulation software sometimes requires special skills because of the complexity of
the formal solution method.

The Methodology of Simulation

Simulation involves setting up a model of a real system and conducting repetitive
experiments on it. The methodology consists of the following steps, as shown in Figure 10.4:

1. Define the problem. We examine and classify the real-world problem, specifying why a simulation approach is appropriate. The system's boundaries, environment, and other such aspects of problem clarification are handled here.

2. Construct the simulation model. This step involves determination of the variables and their relationships, as well as data gathering. Often the process is described by using a flowchart, and then a computer program is written.

3. Test and validate the model. The simulation model must properly represent the system being studied. Testing and validation ensure this.

4. Design the experiment. When the model has been proven valid, an experiment is designed. Determining how long to run the simulation is part of this step. There are two important and conflicting objectives: accuracy and cost. It is also prudent to identify typical (e.g., mean and median cases for random variables), best-case (e.g., low-cost, high-revenue), and worst-case (e.g., high-cost, low-revenue) scenarios. These help establish the ranges of the decision variables and environment in which to work and also assist in debugging the simulation model.

5. Conduct the experiment. Conducting the experiment involves issues ranging from random-number generation to result presentation.

[Figure 10.4 depicts the simulation process as a flowchart: define the problem, construct the simulation model, test and validate the model, design and conduct the experiments, evaluate the results, and implement the results, with "do-over" feedback loops returning to earlier steps and a path back to changing the real-world problem.]

FIGURE 10.4 The Process of Simulation.

6. Evaluate the results. The results must be interpreted. In addition to standard statistical tools, sensitivity analyses also can be used.

7. Implement the results. The implementation of simulation results involves the same issues as any other implementation. However, the chances of success are better because the manager is usually more involved with the simulation process than with other models. Higher levels of managerial involvement generally lead to higher levels of implementation success.

Banks and Gibson (2009) presented some useful advice about simulation practices. For example, they list the following seven issues as the common mistakes committed by simulation modelers. The list, though not exhaustive, provides general directions for professionals working on simulation projects.

• Focusing more on the model than on the problem
• Providing point estimates
• Not knowing when to stop
• Reporting what the client wants to hear rather than what the model results say
• Lack of understanding of statistics
• Confusing cause and effect
• Failure to replicate reality

In a follow-up article, they provide additional guidelines. The reader should consult this article: analytics-magazine.org/spring-2009/205-software-solutions-the-abcs-of-simulation-practice.html

Simulation Types
As we have seen, simulation and modeling are used when pilot studies and experimenting with real systems are expensive or sometimes impossible. Simulation models allow us to investigate various interesting scenarios before making any investment. In fact, in simulations, the real-world operations are mapped into the simulation model. The model consists of relationships and, consequently, equations that all together represent the real-world operations. The results of a simulation model, then, depend on the set of parameters given to the model as inputs.

There are various simulation paradigms, such as Monte Carlo simulation, discrete event, agent based, or system dynamics. One of the factors that determines the type of simulation technique is the level of abstraction in the problem. Discrete event and agent-based models are usually used for middle or low levels of abstraction. They usually consider individual elements such as people, parts, and products in the simulation models, whereas system dynamics is more appropriate for aggregate analysis.

In the following sections, we introduce several major types of simulation: probabilistic simulation, time-dependent and time-independent simulation, visual simulation, system dynamics modeling, and agent-based modeling.

PROBABILISTIC SIMULATION In probabilistic simulation, one or more of the independent variables (e.g., the demand in an inventory problem) are probabilistic. They follow certain probability distributions, which can be either discrete distributions or continuous distributions:

• Discrete distributions involve a situation with a limited number of events (or variables) that can take on only a finite number of values.
• Continuous distributions are situations with unlimited numbers of possible events that follow density functions, such as the normal distribution.

The two types of distributions are shown in Table 10.1.


TABLE 10.1 Discrete Versus Continuous Probability Distributions

Daily Demand    Discrete Probability
5               .10
6               .15
7               .30
8               .25
9               .20

Continuous Probability: Daily demand is normally distributed with a mean of 7 and a standard deviation of 1.2.
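As an illustrative sketch (not part of the textbook), the two distributions in Table 10.1 could be sampled as follows; with many draws, the sample means approach the theoretical means (7.3 for the discrete case, 7.0 for the continuous case).

```python
import random

# Discrete distribution from Table 10.1: daily demand of 5..9 units
demands = [5, 6, 7, 8, 9]
weights = [0.10, 0.15, 0.30, 0.25, 0.20]

def sample_discrete():
    # Draw one demand value according to the stated probabilities
    return random.choices(demands, weights=weights, k=1)[0]

def sample_continuous():
    # Continuous counterpart: normal with mean 7 and standard deviation 1.2
    return random.gauss(7, 1.2)

discrete_draws = [sample_discrete() for _ in range(10_000)]
continuous_draws = [sample_continuous() for _ in range(10_000)]
mean_discrete = sum(discrete_draws) / len(discrete_draws)        # ≈ 7.3
mean_continuous = sum(continuous_draws) / len(continuous_draws)  # ≈ 7.0
```

The discrete sampler can only ever return one of the five listed demand values, whereas the continuous sampler can return any real number, which is exactly the distinction the table illustrates.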

TIME-DEPENDENT VERSUS TIME-INDEPENDENT SIMULATION Time-independent refers to a situation in which it is not important to know exactly when the event occurred. For example, we may know that the demand for a certain product is three units per day, but we do not care when during the day the item is demanded. In some situations, time may not be a factor in the simulation at all, such as in steady-state plant control design. However, in waiting-line problems applicable to e-commerce, it is important to know the precise time of arrival (to know whether the customer will have to wait). This is a time-dependent situation.

Monte Carlo Simulation

In most business decision problems, we usually employ one of the following two types of probabilistic simulations. The most common simulation method for business decision problems is Monte Carlo simulation. This method usually begins with building a model of the decision problem without having to consider the uncertainty of any variables. Then we recognize that certain parameters or variables are uncertain or follow an assumed or estimated probability distribution. This estimation is based upon analysis of past data. Then we begin running sampling experiments. Running sampling experiments consists of generating random values of uncertain parameters and then computing values of the variables that are impacted by such parameters or variables. These sampling experiments essentially amount to solving the same model hundreds or thousands of times. Then we can analyze the behavior of these dependent or performance variables by examining their statistical distributions. This method has been used in simulations of physical as well as business systems. A good public tutorial on the Monte Carlo simulation method is available on Palisade.com (palisade.com/risk/monte_carlo_simulation.asp). Palisade markets @RISK, a popular spreadsheet-based Monte Carlo simulation software. Another popular software in this category has been Crystal Ball, now marketed by Oracle as Oracle Crystal Ball. Of course, it is also possible to build and run Monte Carlo experiments within an Excel spreadsheet without using any add-on software such as the two just mentioned. But these tools make it more convenient to run such experiments in Excel-based models. Monte Carlo simulation models have been used in many commercial applications. Examples include Procter & Gamble using these models to determine how to hedge foreign-exchange risks; Lilly using the model for deciding optimal plant capacity; Abu Dhabi Water and Electricity Company using @RISK for forecasting water demand in Abu Dhabi; and literally thousands of other actual case studies. Each of the simulation software companies' Web sites includes many such success stories.

One DSS modeling language, Planners Lab, which was mentioned in Chapter 2 (and is available online for free for academic use), also includes significant Monte Carlo simulation capabilities. The reader is urged to review the online tutorial for Planners Lab to appreciate how easy it can be to build and run Monte Carlo simulation models for analyzing the uncertainty in a problem.
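To make the sampling-experiment idea concrete, here is a minimal Monte Carlo sketch in plain Python. The profit model, the distributions, and every figure in it are hypothetical (they are not taken from the case studies above); the point is only the mechanics of generating random inputs and solving the same model thousands of times.

```python
import random
import statistics

# Hypothetical one-product model: profit = (price - unit cost) * demand - fixed cost
PRICE = 10.0
FIXED_COST = 2_000.0
N_TRIALS = 10_000   # solve the same model thousands of times

def one_trial():
    # Uncertain inputs follow assumed distributions (as if estimated from past data)
    demand = random.gauss(1_000, 100)      # uncertain demand: normal(1000, 100)
    unit_cost = random.uniform(5.5, 7.5)   # uncertain unit cost: uniform(5.5, 7.5)
    return (PRICE - unit_cost) * demand - FIXED_COST

profits = [one_trial() for _ in range(N_TRIALS)]        # the sampling experiments
mean_profit = statistics.mean(profits)                  # expected profit estimate
risk_of_loss = sum(p < 0 for p in profits) / N_TRIALS   # fraction of losing trials
```

Examining the statistical distribution of `profits` (its mean, spread, and the probability of a loss) is precisely what tools such as @RISK and Oracle Crystal Ball automate inside a spreadsheet.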

Discrete Event Simulation

Discrete event simulation refers to building a model of a system where the interaction between different entities is studied. The simplest example of this is a shop consisting of a server and customers. By modeling the customers arriving at various rates and the server serving at various rates, we can estimate the average performance of the system, waiting time, the number of waiting customers, etc. Such systems are viewed as collections of customers, queues, and servers. There are thousands of documented applications of discrete event simulation models in engineering, business, etc. Tools for building discrete event simulation models have been around for a long time, but these have evolved to take advantage of developments in graphical capabilities for building and understanding the results of such simulation models. We will discuss this modeling method further in the next section.
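A back-of-the-envelope sketch of the single-server shop just described can be written by processing arrival and departure events in order. The arrival and service rates below are invented for illustration; a real study would use a package such as Arena or Simio rather than hand-rolled bookkeeping.

```python
import random

# Hypothetical single-server shop: exponential interarrival and service times
ARRIVAL_RATE = 1.00   # mean of 1 customer arriving per time unit
SERVICE_RATE = 1.25   # server completes 1.25 customers per time unit on average
N_CUSTOMERS = 20_000

def simulate_queue():
    clock = 0.0            # time of the current arrival event
    server_free_at = 0.0   # time of the pending departure event
    total_wait = 0.0
    for _ in range(N_CUSTOMERS):
        clock += random.expovariate(ARRIVAL_RATE)   # next arrival event
        start = max(clock, server_free_at)          # queue if the server is busy
        total_wait += start - clock                 # time spent waiting in line
        server_free_at = start + random.expovariate(SERVICE_RATE)  # departure
    return total_wait / N_CUSTOMERS

avg_wait = simulate_queue()
```

For these rates, elementary queueing theory puts the long-run average wait at λ/(μ(μ − λ)) = 3.2 time units, so the simulated `avg_wait` should land in that neighborhood; changing either rate and rerunning is exactly the kind of what-if experiment the text describes.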

VISUAL SIMULATION The graphical display of computerized results, which may include animation, is one of the most successful developments in computer-human interaction and problem solving. We describe this in the next section.

SECTION 10.4 REVIEW QUESTIONS

1. List the characteristics of simulation.
2. List the advantages and disadvantages of simulation.
3. List and describe the steps in the methodology of simulation.
4. List and describe the types of simulation.

10.5 VISUAL INTERACTIVE SIMULATION
We next examine methods that show a decision maker a representation of the decision-making situation in action as it runs through scenarios of the various alternatives. These powerful methods overcome some of the inadequacies of conventional methods and help build trust in the solution attained because they can be visualized directly.

Conventional Simulation Inadequacies

Simulation is a well-established, useful, descriptive, mathematics-based method for gaining insight into complex decision-making situations. However, simulation does not usually allow decision makers to see how a solution to a complex problem evolves over (compressed) time, nor can decision makers interact with the simulation (which would be useful for training purposes and teaching). Simulation generally reports statistical results at the end of a set of experiments. Decision makers are thus not an integral part of simulation development and experimentation, and their experience and judgment cannot be used directly. If the simulation results do not match the intuition or judgment of the decision maker, a confidence gap in the results can occur.

Visual Interactive Simulation
Visual interactive simulation (VIS), also known as visual interactive modeling (VIM) and visual interactive problem solving, is a simulation method that lets decision makers see what the model is doing and how it interacts with the decisions made, as they are made. The technique has been used with great success in operations management DSS. The user can employ his or her knowledge to determine and try different decision strategies while
interacting with the model. Enhanced learning, about both the problem and the impact
of the alternatives tested, can and does occur. Decision makers also contribute to model
validation. Decision makers who use VIS generally support and trust their results.

VIS uses animated computer graphic displays to present the impact of different
managerial decisions. It differs from regular graphics in that the user can adjust the
decision-making process and see the results of the intervention. A visual model is a
graphic used as an integral part of decision making or problem solving, not just as a
communication device. Some people respond better than others to graphical displays,
and this type of interaction can help managers learn about the decision-making
situation.

VIS can represent static or dynamic systems. Static models display a visual image of the result of one decision alternative at a time. Dynamic models display systems that evolve over time, and the evolution is represented by animation. The latest visual simulation technology has been coupled with the concept of virtual reality, where an artificial world is created for a number of purposes, from training to entertainment to viewing data in an artificial landscape. For example, the U.S. military uses VIS systems so that ground troops can gain familiarity with terrain or a city in order to very quickly orient themselves. Pilots also use VIS to gain familiarity with targets by simulating attack runs. The VIS software can also include GIS coordinates.

Visual Interactive Models and DSS

VIM in DSS has been used in several operations management decisions. The method
consists of priming (like priming a water pump) a visual interactive model of a plant
(or company) with its current status. The model then runs rapidly on a computer, allowing
managers to observe how a plant is likely to operate in the future.

Waiting-line management (queuing) is a good example of VIM. Such a DSS
usually computes several measures of performance for the various decision alternatives
(e.g., waiting time in the system). Complex waiting-line problems require simulation.
VIM can display the size of the waiting line as it changes during the simulation runs and
can also graphically present the answers to what-if questions regarding changes in input
variables. Application Case 10.4 gives an example of a visual simulation that was used
to explore the applications of RFID technology in developing new scheduling rules in a
manufacturing setting.

Application Case 10.4
Improving Job-Shop Scheduling Decisions Through RFID: A Simulation-Based Assessment
A manufacturing services provider of complex optical and electro-mechanical components seeks to
gain efficiency in its job-shop scheduling decision
because the current shop-floor operations suffer
from a few issues:

• There is no system to record when the work-
in-process (WIP) items actually arrive at or
leave operating workstations and how long
those WIPs actually stay at each workstation.

• The current system cannot monitor or keep
track of the movement of each WIP in the
production line in real time.

As a result, the company is facing two main
issues at this production line: high backlogs and high
costs of overtime to meet the demand. Additionally,
the upstream cannot respond to unexpected
incidents such as changes in demand or material
shortages quickly enough and revise schedules in a cost-effective manner. The company is considering
implementing RFID on a production line. A discrete
event simulation program is then developed to
examine how track and traceability through RFID can
facilitate job-shop production scheduling activities.

The visibility-based scheduling (VBS) rule that
utilizes the real-time traceability systems to track
those WIPs, parts and components, and raw materi-
als in shop-floor operations is proposed. A simula-
tion approach is applied to examine the benefit of
the VBS rule against the classical scheduling rules:
the first-in-first-out (FIFO) and earliest due date
(EDD) dispatching rules. The simulation model is
developed using Simio™. Simio is a 3D simulation modeling software package that employs an object-oriented approach to modeling and has recently been used in many areas such as factories, supply chains, healthcare, airports, and service systems.
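The logic of comparing dispatching rules by simulation can be illustrated with a much simpler, hypothetical sketch than the full Simio model used in the study. In the following Python fragment, the job data, the single-machine setting, and the tardiness measure are all invented for demonstration:

```python
import random

def simulate_dispatch(rule, n_jobs=500, seed=7):
    """Single-machine sketch: average tardiness under two dispatching rules.

    rule: "FIFO" (process in arrival order) or "EDD" (earliest due date first).
    """
    rng = random.Random(seed)
    # Each job: (processing_time, due_date); due dates span the planning horizon.
    jobs = [(rng.uniform(1, 5), rng.uniform(0, n_jobs * 3.0)) for _ in range(n_jobs)]
    if rule == "EDD":
        jobs = sorted(jobs, key=lambda j: j[1])   # earliest due date first
    clock = 0.0
    total_tardiness = 0.0
    for proc, due in jobs:
        clock += proc                             # job finishes at this time
        total_tardiness += max(0.0, clock - due)  # lateness beyond the due date
    return total_tardiness / n_jobs

print("FIFO avg tardiness:", round(simulate_dispatch("FIFO"), 1))
print("EDD  avg tardiness:", round(simulate_dispatch("EDD"), 1))
```

Because EDD sequences jobs by urgency, it typically produces far less tardiness than processing jobs in arrival order. A full job-shop model such as the one in this case adds multiple workstations, real-time RFID status updates, and richer performance measures.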

Figure 10.5 presents a screenshot of the SIMIO
interface panel of this production line. The parameter
estimates used for the initial state in the simulation
model include weekly demand and forecast, process
flow, number of workstations, number of shop-floor operators, and operating time at each workstation.

FIGURE 10.5 SIMIO Interface View of the Simulation System.
Additionally, parameters of some of the input data
such as RFID tagging time, information retrieving time,
or system updating time are estimated from a pilot
study and from the subject matter experts. Figure 10.6
presents the process view of the simulation model
where specific simulation commands are implemented
and coded. Figures 10.7 and 10.8 present the standard
report view and pivot grid report of the simulation
model. The standard report and pivot grid format
provide a very quick method to find specific statistical
results such as average, percent, total, maximum, or
minimum values of variables assigned and captured
as an output of the simulation model.

The results of the simulation suggest that an
RFID-based scheduling rule generates better perfor-
mance compared to traditional scheduling rules with
regard to processing time, production time, resource
utilization, backlogs, and productivity.

Source: Based on J. Chongwatpol and R. Sharda, “RFID-Enabled Track and Traceability in Job-Shop Scheduling Environment,” European Journal of Operational Research, Vol. 227, No. 3, 2013, pp. 453–463, http://dx.doi.org/10.1016/j.ejor.2013.01.009.


(Continued)

456 Part IV • Prescriptive Analytics

Application Case 10.4 (Continued)

FIGURE 10.6 Process View of the Simulation Model.

FIGURE 10.7 Standard Report View.


FIGURE 10.8 Pivot Grid Report of the Simulation Model.

The VIM approach can also be used in conjunction with artificial intelligence.
Integration of the two techniques adds several capabilities that range from the ability to
build systems graphically to learning about the dynamics of the system. These systems,
especially those developed for the military and the video-game industry, have “thinking”
characters who can behave with a relatively high level of intelligence in their interactions
with users.

Simulation Software
Hundreds of simulation packages are available for a variety of decision-making situations.
Many run as Web-based systems. ORMS Today publishes a periodic review of simulation software. One recent review is located at orms-today.org/surveys/Simulation/Simulation.html (accessed February 2013). PC software packages include Analytica



(Lumina Decision Systems, lumina.com) and the Excel add-ins Crystal Ball (now sold by Oracle as Oracle Crystal Ball, oracle.com) and @RISK (Palisade Corp., palisade.com). A major commercial software package for discrete event simulation has been Arena (sold by Rockwell Intl., arenasimulation.com). The original developers of Arena have now developed Simio (simio.com), which was used in the screens shown above. Another popular discrete event VIS software package is ExtendSim (extendsim.com). SAS has a graphical analytics software package called JMP that also includes a simulation component.

For information about simulation software, see the Society for Modeling and Simulation International (scs.org) and the annual software surveys at ORMS Today (orms-today.com).

SECTION 10.5 REVIEW QUESTIONS

1. Define visual simulation and compare it to conventional simulation.
2. Describe the features of VIS (i.e., VIM) that make it attractive for decision makers.
3. How can VIS be used in operations management?
4. How is an animated film like a VIS application?

10.6 SYSTEM DYNAMICS MODELING
System dynamics was introduced in the opening vignette as a powerful method of
analysis. System dynamics models are macro-level simulation models in which aggregate
values and trends are considered. The objective is to study the overall behavior of a system
over time, rather than the behavior of each individual participant or player in the system. The other key dimension is the evolution of the system's various components over time as a result of their interplay. System dynamics (SD) was first introduced by Forrester (1958) to address problems in industrial systems.
He later expanded his work and used system dynamics to model and simulate a classic
supply chain (1961). Since then, system dynamics has contributed to theory building,
problem solving, and research methodology. SD has been used with operations research
and management science approaches (Angerhofer & Angelides, 2000) where SD and
operations research are considered complementary techniques in which SD can provide a
more qualitative analysis for understanding a system, while operations research techniques
build analytical models of the problem. System dynamics has been used extensively in
the area of information technology, which usually changes an organization’s business
processes and behavior. Using system dynamics, possible changes in organizations are
projected and analyzed through conceptual models and simulations. The SD technique
also has been used in evaluating IT investments: Marquez and Blanchar (2006) developed
a system dynamics model to analyze a variety of investment strategies in a high-tech
company. Their simulation allows them to analyze strategies and trade-offs that are hard
to investigate in real cases. A system dynamics model can capture IT benefits that are
sometimes nonlinear and achieved over years.

To create an SD model, we need to draw causal loop diagrams for all processes that lead to some benefits. This is a qualitative step in which the processes, variables, and relationships within the conceptual model are identified. These causal loop diagrams are then transformed into mathematical equations that represent the relations among variables. The equations and stock and flow diagrams are then used to simulate different practical and theoretical scenarios.
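To illustrate the last step, the Python sketch below integrates a single stock with one reinforcing (positive) flow and one balancing (negative) flow using simple Euler steps. It is a generic, illustrative model; the stock and the rate constants are invented and are not part of the EHR example discussed next.

```python
def simulate_stock(periods=50, dt=1.0,
                   stock=100.0, growth_rate=0.05, drain_rate=0.03):
    """Euler integration of one stock with two flows.

    inflow  = growth_rate * stock   (reinforcing / positive loop)
    outflow = drain_rate  * stock   (balancing  / negative loop)
    """
    history = [stock]
    for _ in range(periods):
        inflow = growth_rate * stock
        outflow = drain_rate * stock
        stock += (inflow - outflow) * dt   # the stock accumulates net flow
        history.append(stock)
    return history

trajectory = simulate_stock()
print(f"stock after 50 periods: {trajectory[-1]:.1f}")  # net growth of 2% per period
```

Commercial SD packages hide this numerical integration behind a diagram editor, but every stock-and-flow model ultimately reduces to difference equations of this form.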

Causal loop diagrams show the relationships between variables in a system. A link between two elements shows that changes in one element lead to changes in the other. The direction of the link shows the direction of influence between two elements.
The sign of each arrow shows the direction of change between each pair of elements.


A positive sign means both elements change in the same direction, while a negative sign means the elements change in opposite directions. Feedback processes in the causal loops are the key components by which a variable re-affects itself over time through a chain of causal relationships.

We illustrate a basic application of system dynamics modeling through a partial
model of the impact of electronic health record (EHR) systems. This is based on Kasiri,
Sharda, and Asamoah (2010). Implementing EHR systems is on
the agenda for many healthcare organizations in the next few years. Before investing in
an EHR system, however, decision makers need to identify and measure the benefits of
such systems. Using a system dynamics approach, it is possible to map complex relation-
ships among healthcare processes into a model by which one can dynamically measure
the effect of any changes in the parameters over time. Simulation of EHR implementations
using a system dynamics model produces useful data on the benefits of EHRs that are
hard to obtain through empirical data collection methods. The results of an SD model can
then be transformed into economic values to estimate financial performance.

Let us consider some of the factors that impact healthcare delivery in the hospital
as a result of the implementation of an electronic health records system. The causal loop
diagram in Figure 10.9 shows how different processes and variables interrelate in an
electronic health records system to offer significant benefits to healthcare delivery. The
sign on each arrow indicates the direction of change between each pair of elements. A
positive relationship means both elements change in the same direction while a negative
relationship means the elements change in opposite directions.

Electronic notes (e-notes) and electronic prescribing (e-Rx) are shown as two
common processes in EHRs that contribute to an increase in the amount of staff time saved


FIGURE 10.9 Causal Loop Diagram for Effects of EHR. Source: From Kasiri et al., 2010.


(McGowan et al., 2008). They also contribute to a decrease in patient treatment time, which is the time it takes for a patient to receive medical assistance, starting from initial contact with the receptionist to the time he or she leaves the hospital after receiving medical care from the physician and other hospital staff. The average increase in patient treatment time as a result of adverse drug events (ADEs) is 1.74 days per occurrence (Classen et al., 1997). According to Anderson (2002), entering records directly into computer-based medical information systems contributes to increased quality of care and reduces costs related to ADEs. Hence, instead of paper notes and paper prescriptions, doctors can reduce costs when notes on patients and prescriptions are entered directly into the EHR system.
system. Quality of care is directly affected by the amount of time a patient spends at the
hospital. Based on the diagram in Figure 10.9, there is a positive link between e-note
and staff time saved as well as e-Rx and staff time saved. This indicates that the more
physicians use the EHR system, the less time nurses and other staff need to manually retrieve records and files on patients in order to offer medical support to them; in fact, there is no need to physically transfer files and paper documents from one department to another. Staff can therefore transfer the time saved on documentation to direct contact with the patients and, hence, improve the quality of healthcare given to patients and decrease ADEs.
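The signed links just described can also be represented programmatically. The Python fragment below is a hypothetical sketch (the closing link from quality of care back to e-Rx use is our own invented assumption, not taken from Figure 10.9) showing how multiplying the link signs around a closed loop classifies it as reinforcing or balancing:

```python
# Each causal link maps (cause, effect) -> +1 (same direction) or -1 (opposite).
links = {
    ("e-Rx use", "ADE rate"): -1,          # more e-prescribing, fewer ADEs
    ("ADE rate", "staff time saved"): -1,  # more ADEs, less staff time saved
    ("staff time saved", "quality of care"): +1,
    ("quality of care", "e-Rx use"): +1,   # hypothetical closing link
}

def loop_polarity(loop):
    """Product of link signs around a closed loop:
    +1 = reinforcing (positive) loop, -1 = balancing (negative) loop."""
    sign = 1
    for cause, effect in zip(loop, loop[1:] + loop[:1]):
        sign *= links[(cause, effect)]
    return sign

loop = ["e-Rx use", "ADE rate", "staff time saved", "quality of care"]
print("reinforcing" if loop_polarity(loop) == 1 else "balancing")
```

Two negative links cancel, so this hypothetical loop is reinforcing: greater e-Rx use would feed back to encourage still greater use.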

E-note and e-Rx also impact the occurrence of adverse drug events (Garrido, 2005;
McGowan et al. , 2008). The more the system is utilized to record notes on patients and
to write prescriptions, the fewer the mistakes in the administration of drugs that stem
directly from inefficiencies in manual drug administration processes. Hence, patients
spend less time at the hospital as a result of not having to deal with delays related to complications that could occur with paper notes and paper prescriptions. Also, “staff time saved” is increased because the time needed to correct the mistakes related to ADEs is eliminated. The occurrence of ADEs in hospitals is estimated to average 6.5 events per 100 hospital admissions (Bates et al., 1995; Leape et al., 1995). Subsequently, when the ADE rate decreases through the use of e-note and e-Rx, ADE correction costs also decrease.

The electronic records storage (e-storage) variable refers to the capability to store records in the hospital that otherwise would have been stored in paper format. E-storage is important because it helps in easy retrieval of patients’ medical records even after many years. For instance, EHR enables the use of e-note and e-Rx, electronic forms of paper notes and paper prescriptions, which are easier to store and retrieve than data in hard-copy formats. Hence, EHR helps facilitate the storage and retrieval of health records. Access to a patient’s electronic health records helps physicians easily make decisions and diagnoses based on past records. The delay link from e-storage to patient treatment time indicates that patients can be taken care of much faster if electronic data that offer quicker retrieval are available. Uncertainty in clinical decision making on the part of physicians is greatly reduced as a result of the e-storage capability (Garrido, 2005). Of course, electronic storage of this data is enhanced by greater use of e-Rx and e-note.

Hospitals are required to comply with certain standards regarding the administration of medication and other related healthcare administration processes (Sidorov, 2006). Certain drugs may be restricted, and the amount given to a particular patient must be closely watched by staff at all times. With EHR, physicians can easily track patients’ records to know how much has been given and what amount is yet to be given. If an attempt is made to prescribe an amount that is more than the requisite amount for that particular patient, a “red flag message” can be generated to warn the physician of the imminent breach in compliance. In this way, it is easier to comply with regulations regarding the dispensing of a particular medicine and ensure that the maximum amount that is supposed to be given to the patient is not exceeded. Also, rules can be set in the EHR system to prevent physicians from prescribing certain combinations of drugs because of negative reactions such combinations may cause. If a particular rule


is violated during e-prescribing, a warning message can be immediately generated to
warn the physician of the imminent danger. ADEs that may occur as a result of incorrect
amounts and combinations of drugs given can hence be minimized.

The likelihood that any information system in an organization will be used is closely
related to how well the users are trained in using the system. Hence, when staff, including
nurses, physicians, and lab assistants, are given adequate periodic training, the use and
acceptance of e-Rx, e-note, and the EHR system in general increases. Training also leads
to greater compliance with standards.

In addition, when EHR is integrated with other healthcare delivery departments, such as the radiology and laboratory departments, their performance level is increased. Greater efficiency in the radiology and laboratory departments leads to fewer ADEs and shorter
patient treatment times. Also, using EHR reduces the rate of duplication in radiology
work and provides quicker access to radiology records and, hence, directly increases the
savings in staff time. With the EHR system, a functional department like the radiology
department can directly access the patient’s x-ray order through the e-note functionality.
Hence, mistakes related to incorrect interpretation of physicians’ handwritten orders can
be avoided, leading to a decrease in patient treatment time at the hospital.

The causal loop diagram shows various benefits of EHRs such as lower rate of
ADEs, higher amounts of staff time saved, and lower patient treatment times. In the next
section, we develop a stock and flow diagram with loops that reflect some of the most
important factors that impact the flows. These relationships and effects can be translated
into mathematical equations for simulation purposes. Based on estimated parameters and
initial values, we simulate the model and discuss the results.

Because the goal of this section is only to introduce some concepts of system dynamics
simulation, we will not go into all the details of the technique. Once the causal loop
diagrams are built, one can build the stock and flow diagrams, which lead to developing
the mathematical equations for simulating the behavior of the underlying system under
study. Results can provide considerable insight into the growing behavior of the system
under consideration. In another project, Kasiri and Sharda (2012) studied the effects of introducing item-level radio-frequency identification (RFID) tags in retail stores. They built system dynamics models to identify the impacts of such technology in a retail store: increased visibility of information about what is on the shelves, leading to decreased inventory inaccuracy, better pricing management, and so on. Industry participants were able to
provide inputs on such effects to be able to build models for investment decisions.

Many software tools are now available for building system dynamics models. Such
listings are usually updated on Wikipedia and other sites. Popular tools with both academic and commercial pricing include VenSim, Vissim, and many others.
One free software, Insightmaker, appears to offer both system dynamics and agent-based
modeling capabilities in its Web version.

SECTION 10.6 REVIEW QUESTIONS

1. What is the key difference between system dynamics simulation and other simulation
types?

2. What is the purpose of a causal loop diagram?
3. How are relationships between two variables represented in a causal loop diagram?

10.7 AGENT-BASED MODELING
The term agent is derived from the concept of agency, referring to employing someone
to act on one’s behalf. A human agent represents a person and interacts with others
to accomplish a predefined task. The concept of agents goes surprisingly far back.


More than 60 years ago, Vannevar Bush envisioned a machine called a memex. He imag-
ined the memex assisting humans to manage and process huge amounts of data and
information.

Agent-based modeling (ABM) is a simulation modeling technique to support complex decision systems where a system or network is modeled as a set of autonomous decision-making units called agents that individually evaluate their situation and make decisions on the basis of a set of predefined behavior and interaction rules. This technique is a bottom-up approach to modeling complex systems that is particularly suitable for understanding evolving and dynamic systems. An ABM approach focuses on modeling an “adaptive learning” property rather than an “optimizing” one.
Characteristics such as heterogeneity, rules of thumb or optimization strategies, and adaptive learning leading to new capabilities in the system can be defined as a set of rules and behaviors. ABM is also able to capture emergent phenomena that arise when the components of a system interact with and influence each other. These kinds of characteristics make a system difficult to understand and predict and inherently more unstable. Flocks of birds, social dynamics of science and the birth and decline of disciplines (Sun, Kaur, et al., 2013), traffic jam and crowd simulation, ant colonies, financial contagion, movements of ancient societies (2005), housing segregation and other urban issues (Crooks, 2010), disease propagation (Carley, Altman, et al., 2004), and operations management problems (Caridi and Cavalieri, 2004; Allwood and Lee, 2005, 2008) are some past applications of ABM. Agent-based modeling can be used for business problems in which many interrelated factors, irregular data, high uncertainty, and emergent behaviors exist; interactions between agents are complex, discrete, or nonlinear; the population is heterogeneous; agents exhibit learning and adaptive behaviors; or spatial issues or social networks are of interest.
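A minimal, self-contained sketch can make the bottom-up character of ABM concrete. In the hypothetical Python fragment below, each agent follows a single local rule, drifting toward the average position of the other agents, yet the population as a whole converges to a cluster, an emergent outcome no agent was individually programmed to produce. All parameters are invented for illustration.

```python
import random

class Agent:
    """An autonomous unit with a position and one local behavior rule."""
    def __init__(self, rng):
        self.x = rng.uniform(0, 100)

    def step(self, neighbors, rate=0.1):
        # Local rule: drift toward the mean position of the other agents.
        mean_x = sum(a.x for a in neighbors) / len(neighbors)
        self.x += rate * (mean_x - self.x)

def spread(agents):
    """Distance between the outermost agents (a population-level measure)."""
    xs = [a.x for a in agents]
    return max(xs) - min(xs)

rng = random.Random(1)
agents = [Agent(rng) for _ in range(50)]
before = spread(agents)
for _ in range(100):
    for a in agents:
        a.step([b for b in agents if b is not a])
after = spread(agents)
print(f"spread before: {before:.1f}, after: {after:.4f}")  # agents converge
```

No agent knows about the whole population's spread; clustering emerges purely from repeated local interactions, which is the defining property ABM is designed to capture.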

According to the framework developed by Macal and North (2005), the following steps should be taken to build an agent-based model. First, one should ask what specific problem the model should solve and, in particular, what value agent-based modeling brings to the problem that other problem-solving approaches cannot. The second step involves identifying the agents and adopting a theory of agent behavior: What agents should be included in the model? Who are the decision makers in the system? Which agents have behaviors? What kinds of data on agents are available? Are the data simply descriptive (static attributes), or must they be calculated endogenously by the model and supplied to the agents (dynamic attributes)? Third, the agent relationships should be identified and a theory of agent interaction adopted. That is, the agents’ environment should be studied to determine how the agents interact with the environment, what agent behaviors are of interest, what behavior and interaction rules each agent creates and follows, what decisions the agents make, and what behaviors or actions are acted upon by the agents. Next, the required agent-related data should be collected. Finally, the performance of the agent-based system should be validated against reality, either at the individual agent level or for the model as a whole; in particular, the agent behaviors should be examined.

Agent-based modeling can be implemented either using general programming languages or through specially designed applications that address the requirements of agent modeling. Among agent-based platforms, SWARM (www.swarms.org), Netlogo (http://ccl.northwestern.edu/netlogo), RePast/Sugarscape (www.repast.sourceforge.net), and Escape (www.metascapeabm.com) provide an appropriate graphical user interface and comprehensive documentation (Railsback, Lytinen, et al., 2006). Application Case 10.5 describes a useful application of agent-based modeling to simulate the effects of disease mitigation strategies.


Application Case 10.5
Agent-Based Simulation Helps Analyze Spread of a Pandemic Outbreak
Knowledge about the spread of a disease plays an
important role in both preparing for and respond-
ing to a pandemic outbreak. Previous models for
such analyses are mostly homogenous and make
use of simplistic assumptions about transmission
and the infection rates. These models assume that
each individual in the population is identical and
typically has the same number of potential contacts
with an infected individual in the same time period.
Also each infected individual is assumed to have the
same probability to transmit the disease. Using these
models, implementing any mitigation strategies to
vaccinate the susceptible individuals and treating
the infected individuals become extremely difficult
under limited resources.

In order to effectively choose and implement a
mitigation strategy, modeling of the disease spread
has to be done across the specific set of individuals,
which enables researchers to prioritize the selection
of individuals to be treated first and also gauge the effectiveness of the mitigation strategy.

Although nonhomogeneous models for the spread of a disease can be built based on individual characteristics using the interactions in a contact network, such individual levels of infectivity and vulnerability require complex mathematics to obtain the information needed for such models.

Simulation techniques can be used to generate
hypothetical outcomes of disease spread by simu-
lating events on the basis of hourly, daily, or other
periods and tallying the outcomes throughout the
simulation. A nonhomogeneous agent-based simulation approach allows each member of the population
to be simulated individually, considering the unique
individual characteristics that affect the transmission
and infection probabilities. Furthermore, individual
behaviors that affect the type and length of contact
between individuals, and the possibility of infected
individuals recovering and becoming immune, can
also be simulated via agent-based models.

One such simulation model, built for the
Ontario Agency for Health Protection and Promotion
(OAHPP) following the global outbreak of severe
acute respiratory syndrome (SARS) in 2002-2003,
simulated the spread of disease by applying various

mitigation strategies. The simulation models each state of an individual in each time unit, based on the individual's probabilities of transitioning from the susceptible state to the infected state, then to the recovered state, and back to the susceptible state. The simulation model also
uses an individual’s duration of contact with infected
individuals. The model also accounts for the rate of
disease transmission per time unit based on the type
of contact between individuals and for behavioral
changes of individuals in a disease progression
(being quarantined or treated or recovered). It is
flexible enough to consider several factors affecting
the mitigation strategy, such as an individual’s age,
residence, level of general interaction with other members of the population, number of individuals in each household, distribution of households, and behavioral aspects involving daily commutes, attendance at schools, and the asymptomatic period of the disease.
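The susceptible-infected-recovered-susceptible transitions described above can be sketched as a toy nonhomogeneous agent-based simulation. The following Python fragment is purely illustrative: the population size, contact structure, and probabilities are invented and bear no relation to the OAHPP model's calibrated parameters.

```python
import random

def run_sirs(n=500, steps=100, recover_p=0.1, wane_p=0.02, seed=3):
    """Toy agent-based SIRS simulation with per-agent transmission rates.

    Each agent gets its own infectivity (a nonhomogeneous population); at each
    time step, every infected agent contacts a few random others.
    """
    rng = random.Random(seed)
    state = ["S"] * n                                         # S, I, or R per agent
    infectivity = [rng.uniform(0.02, 0.2) for _ in range(n)]  # heterogeneity
    for i in rng.sample(range(n), 5):                         # seed the outbreak
        state[i] = "I"
    peak = 0
    for _ in range(steps):
        infected = [i for i in range(n) if state[i] == "I"]
        peak = max(peak, len(infected))
        for i in infected:
            for j in rng.sample(range(n), 4):        # a few casual contacts
                if state[j] == "S" and rng.random() < infectivity[i]:
                    state[j] = "I"
            if rng.random() < recover_p:
                state[i] = "R"                       # recover, become immune
        for i in range(n):
            if state[i] == "R" and rng.random() < wane_p:
                state[i] = "S"                       # immunity wanes over time
    return peak

print("peak number infected:", run_sirs())
```

A mitigation strategy such as "symptomatic agents stay home" would be modeled by reducing the number of contacts drawn for a fraction of infected agents and comparing peak infection levels across scenarios, which is exactly the what-if use described in this case.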

The simulation model was tested to measure
the effectiveness of a mitigation strategy involving
an advertising campaign that urged individuals who
have symptoms of disease to stay at home rather
than commute to work or school. The model was
based on a pandemic influenza outbreak in the
greater Toronto area. Each individual agent, gener-
ated from the population, was sequentially assigned
to households. Individuals were also assigned to
different ages based on census age distribution;
all other pertinent demographic and behavioral
attributes were assigned to the individuals.

The model considered two types of contact: close contact, which involved members of the same household or commuters on public transport; and casual contact, which involved random individuals within the same census tract. Influenza pandemic records provided past disease transmission data, including transmission rates and contact time for both close and casual contacts. The effect of public transportation was simplified with an
of public transportation was simplified with an
assumption that every individual of working age
used the nearest subway line to travel. An initial
outbreak of infection was fed into the model. A total
of 1,000 such simulations was conducted.

The results from the simulation indicated that
there was a significant decrease in the levels of infected



Application Case 10.5 (Continued)
and deceased persons as an increasing number of
infected individuals followed the mitigation strategy
of staying at home. The results were also analyzed by
answering questions that sought to verify issues such
as the impact of 20 percent of infected individuals
staying at home versus 10 percent staying at home.
The results from each of the simulation outputs were fed into geographic information system software, ESRI ArcGIS, which produced detailed shaded maps of the greater Toronto area showing the spread of disease based on the average number of cumulative infected individuals. This helped to determine the effectiveness
of a particular mitigation strategy. This agent-based simulation model provides a what-if analysis tool that can be used to compare the relative outcomes of different disease scenarios and mitigation strategies and help in choosing an effective mitigation strategy.

QUESTIONS FOR DISCUSSION

1. What are the characteristics of an agent-based simulation model?
2. List the various factors that were fed into the agent-based simulation model described in the case.
3. Elaborate on the benefits of using agent-based simulation models.
4. Besides disease prevention, in which other situations could agent-based simulation be employed?

What We Can Learn from This Application Case

Advancements in computing technology allow for building advanced simulation models that are nonhomogeneous in nature and account for many socio-demographic and behavioral factors. These simulation models further enhance support for policy decision making by hypothetically simulating many complex real-time problem situations.

Source: D. M. Aleman, T. G. Wibisono, and B. Schwartz, “A Nonhomogeneous Agent-Based Simulation Approach to Modeling the Spread of Disease in a Pandemic Outbreak,” Interfaces, Vol. 41, No. 3, 2011, pp. 301–315.

Chapter Highlights

• Heuristic programming involves problem solving using general rules or intelligent search.
• Genetic algorithms are search techniques that emulate the natural process of biological evolution. They utilize three basic operations: reproduction, crossover, and mutation.
• Reproduction is a process that creates the next-generation population based on the performance of different cases in the current population.
• Crossover is a process that allows elements in different cases to be exchanged to search for a better solution.
• Mutation is a process that changes an element in a case to search for a better solution.
• Simulation is a widely used DSS approach that involves experimentation with a model that represents the real decision-making situation.
• Simulation can deal with more complex situations than optimization, but it does not guarantee an optimal solution.
• There are many different simulation methods. Some that are important for DSS include Monte Carlo simulation, discrete event simulation, system dynamics modeling, and agent-based simulation.
• VIS/VIM allows a decision maker to interact directly with a model and shows results in an easily understood manner.

Key Terms

agent-based models
causal loops
chromosome
crossover
discrete event simulation
elitism
evolutionary algorithm
genetic algorithm
heuristic programming
heuristics
Monte Carlo simulation
mutation
reproduction
simulation
system dynamics
visual interactive modeling (VIM)
visual interactive simulation (VIS)


Questions for Discussion
1. Compare the effectiveness of genetic algorithms against standard methods for problem solving, as described in the literature. How effective are genetic algorithms?

2. Describe the general process of simulation.

3. List some of the major advantages of simulation over optimization and vice versa.

4. What are the advantages of using a spreadsheet package to perform simulation studies? What are the disadvantages?

5. Compare the methodology of simulation to Simon’s four-phase model of decision making. Does the methodology of simulation map directly into Simon’s model? Explain.

6. Many computer games can be considered visual simulation. Explain why.

7. Explain why VIS is particularly helpful in implementing recommendations derived by computers.

Exercises
Teradata University Network (TUN) and Other Hands-on Exercises

1. Each group in the class should access a different online Java-based Web simulation system (especially those systems from visual interactive simulation vendors) and run it. Write up your experience and present it to the class.

2. Solve the knapsack problem from Section 10.3 manually, and then solve it using Evolver. Try another code (find one on the Web). Finally, develop your own genetic algorithm code in Visual Basic, C++, or Java.

3. Search online to find vendors of genetic algorithms and investigate the business applications of their products. What kinds of applications are most prevalent?

4. Go to palisade.com and examine the capabilities of Evolver. Write a summary about your findings.

5. Each group should review, examine, and demonstrate in class a different state-of-the-art DSS software product. The specific packages depend on your instructor and the group interests. You may need to download a demo from a vendor’s Web site, depending on your instructor’s directions. Be sure to get a running demo version, not a slideshow. Do a half-hour in-class presentation, which should include an explanation of why the software is appropriate for assisting in decision making, a hands-on demonstration of selected important capabilities of the software, and your critical evaluation of the software. Try to make your presentation interesting and instructive to the whole class. The main purpose of the class presentation is for class members to see as much state-of-the-art software as possible, both in breadth (through the presentations by other groups) and in depth (through the experience you have in exploring the ins and outs of one particular software product). Write a 5- to 10-page report on your findings and comments regarding this software. Include screenshots in your report. Would you recommend this software to anyone? Why or why not?

End-of-Chapter Application Case

HP Applies Management Science Modeling to Optimize Its Supply Chain and Wins a Major Award
HP’s groundbreaking use of operations research not only
enabled the high-tech giant to successfully transform its product
portfolio program and return $500 million to the bottom line
over a 3-year period, but it also earned HP the coveted 2009
Edelman Award from INFORMS for outstanding achievement in
operations research. “This is not the success of just one person
or one team,” said Kathy Chou, vice president of Worldwide
Commercial Sales at HP, in accepting the award on behalf
of the winning team. “It’s the success of many people across
HP who made this a reality, beginning several years ago with
mathematics and imagination and what it might do for HP.”

To put HP’s product portfolio problem into perspective,
consider these numbers: HP generates more than $135 billion
annually from customers in 170 countries by offering tens of
thousands of products supported by the largest supply chain
in the industry. You want variety? How about 2,000 laser

printers and more than 20,000 enterprise servers and
storage products? Want more? HP offers more than 8 million
configure-to-order combinations in its notebook and desktop
product line alone.

The something-for-everyone approach drives sales,
but at what cost? At what point does the price of designing,
manufacturing, and introducing yet another new product,
feature, or option exceed the additional revenue it is likely
to generate? Just as important, what are the costs associated
with too much or too little inventory for such a product, not
to mention additional supply chain complexity, and how does
all of that impact customer satisfaction? According to Chou, HP
didn’t have good answers to any of those questions before the
Edelman award-winning work.

“While revenue grew year over year, our profits were
eroded due to unplanned operational costs,” Chou said in

466 Part IV • Prescriptive Analytics

HP’s formal Edelman presentation. “As product variety grew,
our forecasting accuracy suffered, and we ended up with
excesses of some products and shortages of others. Our sup-
pliers suffered due to our inventory issues and product design
changes. I can personally testify to the pain our customers
experienced because of these availability challenges.” Chou
would know. In her role as VP of Worldwide Commercial
Sales, she’s “responsible and on the hook” for driving sales,
margins, and operational efficiency.

Constantly growing product variety to meet increasing customer needs was the HP way (after all, the company is nothing if not innovative), but the rising costs and inefficiency associated with managing millions of products and configurations “took their toll,” Chou said, “and we had no idea how to solve it.”

Compounding the problem, Chou added, was HP’s “organizational divide.” Marketing and sales always wanted more: more SKUs, more features, more configurations, and for good reason. Providing every possible product choice was considered an obvious way to satisfy more customers and generate more sales.

Supply chain managers, however, always wanted less. Less to forecast, less inventory, and less complexity to manage. “The drivers (on the supply chain side) were cost control,” Chou said. “Supply chain wanted fast and predictable order cycle times. With no fact-based, data-driven tools, decision making between different parts of the organization was time-consuming and complex due to these differing goals and objectives.”

By 2004, HP’s average order cycle times in North
America were nearly twice that of its competition, making
it tough for the company to be competitive despite its large
variety of products. Extensive variety, once considered a
plus, had become a liability.

It was then that the Edelman prize-winning team, drawn from various quarters both within the organization (HP Business Groups, HP Labs, and HP Strategic Planning and Modeling) and outside it (individuals from a handful of consultancies and universities), and armed with operations research thinking and methodology, went to work on the problem. Over the next few years, the team: (1) produced an analytically driven process for evaluating new products for introduction, (2) created a tool for prioritizing existing products in a portfolio, and (3) developed an algorithm that solves the problem many times faster than previous technologies, thereby advancing the theory and practice of network optimization.

The team tackled the product variety problem from two angles: prelaunch and postlaunch. “Before we bring a new product, feature, or option to market, we want to evaluate return on investment in order to drive the right investment decisions and maximize profits,” Chou said. To do that, HP’s Strategic Planning and Modeling Team (SPaM) developed “complexity return on investment screening calculators” that took into account downstream impacts across the HP product line and supply chain that were never properly accounted for before.
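The case does not disclose how HP’s screening calculators work; the following is only a hypothetical sketch of what a complexity ROI screen might compute, with every name, figure, and hurdle rate invented for illustration.

```python
# Hypothetical sketch of a "complexity ROI" screening calculator for one
# proposed product option. HP's actual calculators and inputs are not
# described in the case; every name and number here is illustrative.

def complexity_roi(incremental_revenue, design_cost, intro_cost,
                   inventory_cost, supply_chain_cost):
    """ROI of adding one option, net of downstream supply chain impacts."""
    total_cost = design_cost + intro_cost + inventory_cost + supply_chain_cost
    return (incremental_revenue - total_cost) / total_cost

roi = complexity_roi(incremental_revenue=1_200_000, design_cost=300_000,
                     intro_cost=150_000, inventory_cost=250_000,
                     supply_chain_cost=200_000)
# Screen: approve the option only if it clears a hurdle rate, say 20%.
print(f"ROI = {roi:.2f}, approve = {roi > 0.20}")  # ROI = 0.33, approve = True
```

The point of such a screen is that inventory and supply chain complexity costs appear explicitly in the denominator, so an option that looks profitable on revenue alone can still fail the test.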

Once a product is launched, variety product management shifts from screening to managing a product portfolio as sales data become available. To do that, the Edelman award-winning team developed a tool called revenue coverage optimization (RCO) to analyze more systematically the importance of each new feature or option in the context of the overall portfolio.

The RCO algorithm and the complexity ROI calculators helped HP improve its operational focus on key products, while simultaneously reducing the complexity of its product offerings for customers. For example, HP implemented the RCO algorithm to rank its Personal Systems Group offerings based on the interrelationship between products and orders. It then identified the “core offering,” which is composed of the most critical products in each region. This core offering represented about 30 percent of the ranked product portfolio. All other products were classified as HP’s “extended offering.”
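The article does not describe the RCO algorithm itself, but the interrelationship between products and orders it mentions can be illustrated with a simple greedy ranking heuristic, sketched below. The data, function name, and greedy rule are all assumptions, not HP’s method.

```python
# Hypothetical greedy sketch in the spirit of revenue coverage optimization:
# rank products so that each successive pick maximizes the additional order
# revenue that becomes fully covered by the ranked set. HP's real algorithm
# is faster and more sophisticated; this only illustrates the ranking idea.

def rco_rank(orders):
    """orders: list of (revenue, set_of_products_in_order).
    Returns products ranked by marginal order-revenue coverage."""
    remaining = set().union(*(prods for _, prods in orders))
    ranked = []
    while remaining:
        def gain(prod):
            chosen = set(ranked) | {prod}
            # revenue of orders whose every product is already in the ranking
            return sum(rev for rev, prods in orders if prods <= chosen)
        best = max(remaining, key=gain)
        ranked.append(best)
        remaining.remove(best)
    return ranked

orders = [(100.0, {"A"}), (80.0, {"A", "B"}), (50.0, {"C"})]
print(rco_rank(orders))  # ['A', 'B', 'C']
```

Ranking products by the additional order revenue each brings into full coverage lets a portfolio manager draw a cutoff (for example, the top 30 percent) to define a core offering, in the spirit of the case.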

Based on these findings, HP adjusted its service level for each class of products. Core offering products are now stocked in higher inventory levels and are made available with shorter lead times, and extended offering products are offered with longer lead times and are either stocked at lower levels or not at all. The net result: lower costs, higher margins, and improved customer service.

The RCO software algorithm was developed as part of
HP Labs’ “analytics” theme, which applies mathematics and
scientific methodologies to help decision making and create
better-run businesses. Analytics is one of eight major research
themes of HP Labs, which last year refocused its efforts to
address the most complex challenges facing technology
customers in the next decade.

“Smart application of analytics is becoming increasingly important to businesses, especially in the areas of operational efficiency, risk management, and resource planning,” says Jaap Suermondt, director, Business Optimization Lab, HP Labs. “The RCO algorithm is a fantastic example of an innovation that helps drive efficiency with our businesses and our customers.”

In accepting the Edelman Award, Chou emphasized not only the company-wide effort in developing elegant technical solutions to incredibly complex problems, but also the buy-in and cooperation of managers and C-level executives and the wisdom and insight of the award-winning team to engage and share their vision with those managers and executives. “For some of you who have not been a part of a very large organization like HP, this might sound strange, but it required tenacity and skill to bring about major changes in the processes of a company of HP’s size,” Chou said. “In many of our business [units], project managers took the tools and turned them into new processes and programs that fundamentally changed the way HP manages its product portfolios and bridged the organizational divide.”


QUESTIONS FOR THE END-OF-CHAPTER APPLICATION CASE

1. Describe the problem that a large company such as HP might face in offering many product lines and options.

2. Why is there a possible conflict between marketing and operations?

3. Summarize your understanding of the models and the algorithms.

4. Perform an online search to find more details of the algorithms.

5. Why would there be a need for such a system in an organization?

6. What benefits did HP derive from implementation of the models?

Source: Adapted with permission, P. Horner, “Less Is More for HP,” ORMS Today, Vol. 36, No. 3, June 2009, pp. 40-44.

References

Aleman, D. M., T. G. Wibisono, and B. Schwartz. (2011). “A Nonhomogeneous Agent-Based Simulation Approach to Modeling the Spread of Disease in a Pandemic Outbreak.” Interfaces, Vol. 41, No. 3, pp. 301-315.

Alfredo, D. M. G., E. N. R. David, M. Cristian, and Z. V. G. Andres. (2011, May/June). “Quantitative Methods for a New Configuration of Territorial Units in a Chilean Government Agency Tender Process.” Interfaces, Vol. 41, No. 3, pp. 263-277.

Allwood, J. M., and J. H. Lee. (2005). “The Design of an Agent for Modelling Supply Chain Network Dynamics.” International Journal of Production Research, Vol. 43, No. 22, pp. 4875-4898.

Angerhofer, B. J., and M. C. Angelides. (2000, Winter). “System Dynamics Modeling in Supply Chain Management: Research Review.” IEEE, Simulation Conference, 2000, Vol. 1, pp. 342-351.

Baker, B. M., and M. A. Syechew. (2003). “A Genetic Algorithm for the Vehicle Routing Problem.” Computers and Operations Research, Vol. 30, No. 5, pp. 787-800.

Banks, J., and R. R. Gibson. (2009). “Seven Sins of Simulation Practice.” INFORMS Analytics, pp. 24-27. www.analytics-magazine.org/summer-2009/193-strategic-problems-modeling-the-market-space (accessed February 2013).

Bates, D. W., D. J. Cullen, N. Laird, L. A. Petersen, S. D. Small, D. Servi, … and A. Edmondson. (1995). “Incidence of Adverse Drug Events and Potential Adverse Drug Events.” JAMA: The Journal of the American Medical Association, Vol. 274, No. 1, pp. 29-34.

Caridi, M., and S. Cavalieri. (2004). “Multi-Agent Systems in Production Planning and Control: An Overview.” Production Planning & Control, Vol. 15, No. 2, pp. 106-118.

Carley, K. M., et al. (2004). “BioWar: A City-Scale Multi-Agent Network Model of Weaponized Biological Attacks.” http://handle.dtic.mil/100.2/ADA459122.

Chongwatpol, J., and R. Sharda. (2013). “RFID-Enabled Track and Traceability in Job-Shop Scheduling Environment.” European Journal of Operational Research.

Classen, D., M. Pestotnik, R. Evans, J. Lloyd, and J. Burke. (1997). “Adverse Drug Events in Hospitalized Patients: Excessive Length of Stay, Extra Cost, and Attributable Mortality.” The Journal of the American Medical Association, Vol. 277, No. 4, pp. 301-311.

Crooks, A. T. (2010). “Constructing and Implementing an
Agent-Based Model of Residential Segregation Through
Vector GIS.” International Journal of Geographical
Information Science, Vol. 24, No. 5, pp. 661-675.

Dardan, S., et al. (2006). “An Application of the Learning Curve and the Nonconstant-Growth Dividend Model: IT Investment Valuations at Intel Corporation.” Decision Support Systems, Vol. 41, No. 4, pp. 688-697.

Garrido, Anderson J. (2002). “Evaluation in Health Informatics: Computer Simulation.” Computers in Biology and Medicine, Vol. 32, No. 3, pp. 151-164.

Garrido, T., L. Jamieson, Y. Zhou, A. Wiesenthal, and L. Liang. (2005). “Effect of Electronic Health Records in Ambulatory Care: Retrospective, Serial, Cross-Sectional Study.” Information in Practice, BMJ, Vol. 330, No. 7491, pp. 1-5.

Godlewski, E., G. Lee, and K. Cooper. (2012). “System Dynamics
Transforms Fluor Project and Change Management.”
Interfaces, Vol. 42, No. 1, pp. 17-32.

Grupe, F. H., and S. Jooste. (2004, March). “Genetic Algorithms: A Business Perspective.” Information Management and Computer Security, Vol. 12, No. 3, pp. 288-297.

Horner, P. (2009, June). “Less Is More for HP.” ORMS Today,
Vol. 36, No. 3, pp. 40-44.

Hutton, D. W., M. L. Brandeau, and S. K. So. (2011). “Doing Good with Good OR: Supporting Cost-Effective Hepatitis B Interventions.” Interfaces, Vol. 41, No. 3, pp. 289-300.

Kasiri, N., R. Sharda, and D. Asamoah. (2012, June). “Evaluating Electronic Health Record Systems: A System Dynamics Simulation.” SIMULATION, Vol. 88, No. 6, pp. 639-648.

Leape, L., D. Bates, D. Cullen, J. Cooper, H. Demonaco, T. Gallivan, R. Hallisey, J. Ives, N. Laird, G. Laffel, R. Nemeskal, L. Petersen, K. Porter, D. Servi, B. Shea, S. Small, and B. Sweitzer.

Macal, C. M., and M. J. North. (2005). “Tutorial on Agent-Based Modeling and Simulation.” Proceedings of the 37th Conference on Winter Simulation. Orlando, Florida, Winter Simulation Conference, pp. 2-15.

Marquez, A. C., and C. Blanchar. (2006). “A Decision Support System for Evaluating Operations Investments in High-Technology Business.” Decision Support Systems, Vol. 41, No. 2, pp. 472-487.

Mattila, V., K. Virtanen, and T. Raivio. (2008, May/June). “Improving Maintenance Decision Making in the Finnish Air Force Through Simulation.” Interfaces, Vol. 38, No. 3, pp. 187-201.

McGowan, J., C. Cusack, and E. Poon. (2008). “Formative Evaluation: A Critical Component in EHR Implementation.” Journal of the American Medical Informatics Association, Vol. 15, No. 3, pp. 297-301.

Nick, Z., and P. Themis. (2001). “Web Search Using a Genetic
Algorithm.” IEEE Internet Computing, Vol. 5, No. 2.

Railsback, S., et al. (2006). “Agent-Based Simulation Platforms: Review and Development Recommendations.” Simulation, Vol. 82, No. 9, pp. 609-623.

Shin, K., and Y. Lee. (2002). “A Genetic Algorithm Application in Bankruptcy Prediction Modeling.” Expert Systems with Applications, Vol. 23, No. 3.

Sidorov, J. (2006). “It Ain’t Necessarily So: The Electronic Health Record and the Unlikely Prospect of Reducing Health Care Costs.” Health Affairs, Vol. 25, No. 4, pp. 1079-1085.

Thompson, B., and M. Vliet. (1995). “Systems Analysis of Adverse Drug Events.” The Journal of the American Medical Association, Vol. 274, No. 1, pp. 35-43.

Thompson, D. I., J. Osheroff, D. Classen, and D. F. Sittig. (2007). “A Review of Methods to Estimate the Benefits of Electronic Medical Records in Hospitals and the Need for a National Benefits Database.” Healthcare Information and Management Systems Society (HIMSS), Vol. 21, No. 1, pp. 62-68.

Walbridge, C. T. (1989, June). “Genetic Algorithms: What Computers Can Learn from Darwin.” Technology Review (USA), Vol. 92, No. 1.

Xiaoling, S., J. Kaur, S. Milojevic, A. Flammini, and F. Menczer.
(2013). “Social Dynamics of Science.” Scientific Reports,
Vol. 3, p. 1069.
