Dossier assignment

 

Each article summary must list the citation (APA format) for the article being summarized and provide 800-1000 words in which the student summarizes the main questions and conclusions of the paper and explains how the conclusions of the paper are useful to someone working in financial management of a corporation in the student's current or proposed career path. The text of each summary must be 12-point Times New Roman font, with one-inch top, bottom, left, and right margins, and each summary must begin on a new page.

Risk Management and Insurance Review

© Risk Management and Insurance Review, 2018, Vol. 21, No. 3, 389-411
DOI: 10.1111/rmir.12112

FEATURE ARTICLE

A CONCEPTUAL MODEL FOR PRICING HEALTH AND LIFE
INSURANCE USING WEARABLE TECHNOLOGY
Michael McCrea
Mark Farrell

ABSTRACT

A health risk score was created to investigate the possibility of using data
provided by wearable technology to help predict overall health and mortality,
with the ultimate goal of using this score to enhance the pricing of health
or life insurance. Subjects were categorized into low-, increased-, and high-
risk groups, and after results were adjusted for age and sex, Cox proportional
hazards analysis revealed a high level of significance when predicting mortality.
High-risk subjects were shown to have a hazard ratio of 2.1 relative to those
in the low-risk group, which can be interpreted as an equivalent increase in
age of 7.8 years. Our findings help to demonstrate the predictive capabilities
of potential new rating factors, measured via wearables, that could feasibly be
incorporated into actuarial insurance pricing models. The model also provides
an initial step for insurers to begin to consider the incorporation of continuous
wearable data into current risk models. With this in mind, an emphasis is
placed on the limitations of the study in order to highlight the areas that must
be addressed before incorporating aspects of this model within current pricing
models.

INTRODUCTION
Much like the disruptions seen in the banking industry over the past decade, emerging
technologies are revolutionizing the insurance industry. Traditional insurers are under
pressure to innovate existing business models to retain a competitive edge (Hilton,
2017). Data from CB Insights showed funding to start-ups in the newly coined InsurTech
industry has risen from $140 million in 2011 to $2.7 billion in 2016, and investment in
the sector is expected to continue to grow as new technologies arise (Catlin et al., 2017;
Jubraj et al., 2017).

The primary driver of this change has been the increasingly larger amounts of personal
data available to insurers, which offers the opportunity to predict the risk for each

Michael McCrea and Mark Farrell are in Queen’s Management School, Queen’s University
Belfast, Riddel Hall, 185 Stranmillis Rd, Belfast BT9 5EE, UK; e-mail: mmccrea11@qub.ac.uk,
mark.farrell@qub.ac.uk. The authors would like to thank Anthony Horn, whose invaluable
suggestions contributed significantly to the planning of this research.


customer and charge them accordingly. Traditionally, in life insurance, underwriting data
come from a questionnaire and a medical examination performed by a registered nurse
or licensed physician, depending on the coverage amount and the age of the customer.
A new potential source of this information is the Internet of Things (IoT), composed of
the network of physical objects that can connect to the Internet and communicate with
one another. Examples of these include mobile phones, pacemakers, onboard computers
in cars, and of most interest to this study, wearable technology. This term, often shortened
to just wearables, describes all technology that is worn comfortably on the body or
combined with clothing (Tehrani and Michael, 2014). Common wearables include Fitbit,
Garmin fitness bands, and the Oura ring. By use of the IoT, insurers have the potential
to access huge amounts of real-time data, allowing them to build far more accurate risk
profiles concerning the people they insure. Not only can this allow a more personal and
fairer method of pricing, but through improved engagement with the customer risks
can be both managed and reduced.

The overall research aim of this article is to demonstrate the potential that data derived
from wearable devices may provide to insurance companies in terms of new rating
factors for their pricing models. As such, we develop a conceptual risk model that utilizes
data measurable by wearables, and can classify a policyholder’s risk relative to the rest of
the population. This risk model serves to highlight the potential for insurance companies
to incorporate wearable device data in their own health- and life-insurance-related
pricing models. As this is to be a preliminary model demonstrating potential and acting
as a proof of concept, simplicity will be key in order to retain the model’s generalizability.
This is the first study that attempts to create a health risk score comprising solely data
that can be collected in a continuous manner by wearable technology.

The rest of the article is outlined as follows. The “Literature Review” section investigates
the current state of the insurance industry with respect to the use of wearable technology,
and then provides a review of medical literature used in the formation of health risk
scores. The “Methodology” section describes the methods used to analyze the data.
The “Statistical Analysis—Results” section comprises the results of the analyses and
the diagnostic tests performed to confirm the validity of the findings. The “Discussion”
section discusses the possible implications of the findings, and follows with an in-depth
investigation into the limitations of the analyses performed. The article is concluded
with suggestions for possible extensions of the study.

LITERATURE REVIEW
Current State of the Insurance Industry
The insurance industry is well aware of the challenges it is likely to face over the coming
years, and so is investing heavily in research and data in order to evolve (Sultan, 2015).
Over 63 percent of insurers expect wearables to affect the industry significantly in the
next 2 years (Schwartz and Hamilton, 2015). A huge advantage of these pieces of tech-
nology is the ability to record and analyze data continuously with minimal interaction.

Already, a number of insurers have begun to incorporate wearables into their products,
trialing new innovative programs in an attempt to get ahead of their competitors and
break into markets of potential customers previously considered uninsurable. A main
area of concern for insurers is customers’ willingness to participate in these kinds of programs.
Opt-in rates can be as low as 5 percent; however, PwC found that if the wearable was


provided for free, over two-thirds of customers or employees would wear the device
(Dart, 2015). With this in mind, companies such as John Hancock Financial and MLC
have provided Fitbits and smartwatches to customers for free (Becher, 2016). As the main
goal at this stage of the process is the collection of data to analyze, they have agreed to
lower premiums as an incentive for customers to release their personal data. A more
original method to collect data was used by United Health, which used a penalty rather
than a rewards system to motivate consumers (Dart, 2015). By requiring users to reach
fitness targets in order to avoid the purchase cost of the wearable, their system was three
times more successful in collecting data.

Other companies have gone even further and have launched programs making direct
use of the technology available and the data they receive. One of the first insurers to use
wearables in their “Vitality” product was South African company Discovery (Abraham,
2016). Vitality provides rewards such as discounted travel or accommodation if certain
activity levels are met. This is profitable as when policyholders become healthier, the
expected cost of the risk pool is lowered. Similar programs are offered by other insurers
such as MLC in Australia, who provide premium discounts when healthy behaviors are
displayed. These products are also being marketed toward large businesses that pur-
chase insurance, as healthier employees have on average greater levels of productivity
(Abraham, 2016). Existing wearables used for these sorts of programs include the Fitbit1
and the Apple Watch.2 These products often contain accelerometers to detect move-
ment/sleep/heart rate; data can usually be accessed remotely via smartphone with the
relevant GPS tracking data.

The usage of real-time data from wearables draws parallels with insurance telematics
programs, which have increasingly gained market share in the automotive industry
(Wahlström et al., 2015). GPS can measure metrics such as mileage, speeding, and lo-
cation, whereas an accelerometer known as the “Black Box” can record instances of
hard braking, sharp turns, and sudden acceleration (Iqbal and Lim, 2006). Analysis
of these data can help build a more personalized and accurate estimate of the level of
risk the policyholder places on the insurer. In addition to this, drivers tend to improve
driving method when monitored by telematic devices in order to lower their premiums
(Azzopardi and Cortis, 2013); thus, the policies can encourage safer driving behaviors,
lowering the expected cost of the risk pool. If wearables can follow automotive telemat-
ics and gain a foothold in the insurance industry, they have the potential to become an
integral part of many health and life insurance policies.

Health Risk Scores in the Literature
The key question for research is how insurers can transform wearable technology’s raw
data into meaningful information that could be used to price their products. Without
being able to find a quantifiable link between the measurements and the health of an
individual, the data have no value (Abraham, 2016). One possible option to achieve this
could be using these data to create a health risk score. The concept of summarizing a
patient’s data into a single score is not new in academia. The Framingham Risk Score
(Wilson et al., 1998) is used worldwide as an estimate of cardiovascular risk, and the

1 www.fitbit.com
2 www.apple.com/uk/watch/


probability of onset of Type 2 diabetes is typically predicted by the Diabetes Risk Score
(Lindström and Tuomilehto, 2003).

Although the risk of a specific health condition can be modeled to a reasonable level of
accuracy using known causal variables, the formation of a health score using simple met-
rics to quantify an individual’s overall health and attempt to predict all-cause mortality
is a more challenging task. Due to their known strong association with mortality, certain
factors such as smoking, alcohol consumption, diet, and physical activity are prevalent
in the majority of studies (Knoops et al., 2004; van Dam et al., 2008; Khaw et al., 2008;
Gopinath et al., 2010; Kvaavik et al., 2010; Nechuta et al., 2010; van den Brandt, 2011;
Hamer et al., 2011; Ding et al., 2015). By creating health risk scores whereby each good
(bad) behavior is assigned a point, higher scores were consistently associated with an
increased (decreased) risk of mortality. Some studies went further and considered the
individual risk combinations and the possibility of synergistic relationships between
the factors (Ding et al., 2015), with smoking and excess alcohol consumption having
substantially more effect on mortality when combined. A key objective of many studies
was to attempt to discover new factors to incorporate into their risk scores. Nechuta et al.
(2010) find waist-hip ratio may be an even stronger predictor of mortality than body
mass index (BMI), and include both of these factors in their health risk scores. Ding et al.
(2015) incorporate metrics to measure a sedentary lifestyle, finding that both prolonged
sitting and unhealthy sleep duration could be used in combination with physical activity
levels in a health score.

Methods used to classify diets also vary considerably between studies. The most com-
mon way this is performed is by summing up the quantity of fruits and vegetables eaten
and using this as a proxy for a healthy diet, but researchers have endeavored to improve
this simple method by including a range of different foods eaten (van den Brandt, 2011).
A further step was performed by Khaw et al. (2008) who use blood plasma vitamin
C concentrations as a proxy, allowing a measured value to be reported rather than a
potentially biased and inaccurate self-reported value.

An interesting idea can be drawn from Glei et al. (2014) and Gruenewald et al. (2006),
who discuss the notion of using particular biomarkers to represent the functionality of
different biological systems. Gruenewald et al. (2006) give suggestions of biomarkers to
act as proxies for neurological function, immune activity, cardiovascular function, and
metabolic activity. From the perspective of creating a health risk score, ensuring chosen
metrics can represent the functionality of all major biological systems could be a route
to creating a more complete picture of overall health.

Using walking activity as a metric to predict mortality has had success in both young and
elderly populations (Tudor-Locke et al., 2011). Walking is a particularly useful metric;
while improving cardiovascular or respiratory health, it can also suggest a conscious
decision to lead a healthy lifestyle when done for pleasure. Furthermore, a lack of
walking can also be indicative of underlying chronic conditions. Simple measures of
walking associated with mortality include distance walking per day (Hakim et al., 1998)
or, equivalently, the average number of steps each day (Tudor-Locke et al., 2011). Ganna
and Ingelsson (2015) find self-reported walking pace to be one of the strongest lifestyle
predictors of mortality, greater even than smoking habits. This measurement could be


recorded by wearables using a combination of GPS data and accelerometers with the
ability to distinguish between walking, running, and other types of movement.

Because failure of the cardiovascular system is responsible for a large proportion of
deaths, it is only natural that many studies have focused on finding ways to measure this
risk. Elevated resting heart rate has been shown by many studies to be an independent
predictor of both cardiovascular and all-cause mortality (Seccareccia et al., 2001; Jensen
et al., 2013; Zhang et al., 2015). This is not the only possible metric however; blood
pressure is shown to be strongly associated with the occurrence of a stroke, and is also
highly correlated to all-cause mortality (Georgakis et al., 2017). As technology progresses,
heart rate variability, which is typically measured by ECG, will likely become measurable
on a continuous basis and shows much promise in being used to predict heart failure
(Lucena et al., 2016).

Ding et al. (2015) incorporate sleep duration into their health risk score, yet this is only
one way to measure sleep. Wong et al. (2012) find that in addition to duration, sleep
quality also had a marked effect on physical well-being in a sample of Chinese students.
A connection between poor sleeping patterns and the risk of onset of Type 2 diabetes is
found by Cappuccio et al. (2010a); as a long-term illness, diabetes can be very expensive
for an insurer.

The list of metrics discussed here is not exhaustive, but merely an indication of how
much potential exists for this academic area to be developed. Subject to availability
of data, the “Assessment of Health Metrics” section attempts to utilize these possible
metrics to create a wearable-focused score.

METHODOLOGY
Study Population
This analysis is based on data from participants of the Health and Lifestyle Survey
(HALS) (Cox, 1988), with the target population defined as individuals 18 years and over
in England, Wales, and Scotland. The methods and rationale of this study have been
reported elsewhere (Cox et al., 1987). In brief, 12,672 addresses were selected randomly
from electoral registers, yielding 12,254 suitable households, from each of which one
person was randomly chosen. A response rate of 73 percent generated 9,003 in-person
interviews, with 82 percent (7,414) agreeing to a further visit from a study nurse to carry
out various health measurements. Comparison with the 1981 census showed that the
sample was representative of the adult British population (Blaxter, 1987). The current
status of the participants (alive or deceased) as of June 2009 was provided by the UK
National Health Service (NHS) Central Registry.

Relevant areas of the interviews included lifestyle habits such as alcohol consumption,
smoking, physical activity, and sleep duration. Information about previous diagnoses
and health history were also recorded at this time. Height, weight, blood pressure, and
resting heart rate were measured by a study nurse in the follow-up visit.

Assessment of Health Metrics
The chosen health metrics were included for a number of reasons, but the most impor-
tant factor was the ability to effectively quantify the information provided by the HALS
to separate subjects into healthy and unhealthy groups. The commonly used metrics


TABLE 1
Health Metric Classifications

Health Metric         Variable   Criterion for Point                         Percentage

Alcohol consumption   Al         Intake of > 14 units of alcohol per week    18.3%
Smoking               Sm         Current smoker                              36.1%
BMI                   Bmi        BMI ≥ 30 kg/m² (obese)                      9.5%
Physical activity     Phy        ≤ 120 minutes/week leisure-time exercise    76.0%
Sleep duration        Sd         Sleeping < 7 or > 9 hours/day               39.8%
Blood pressure        Bp         Hypertensive reading                        8.6%
Resting heart rate    Rhr        Rate ≥ 90 bpm                               4.7%
Walking duration      Wd         Walking < 20 minutes/day                    18.7%

(alcohol consumption, smoking, and BMI) are health metrics that have been shown in
previous literature to have an effect on mortality risk (Khaw et al., 2008; Kuh et al.,
2009; Hamer et al., 2011). The metrics measurable by wearables were chosen due to
the existence of wearables that can currently record these metrics to a certain level of
accuracy. Exercise activity can be detected and distinguished from walking or regular
activity through movement sensors and heart rate increases (Comstock, 2015). Sleeping
activity can be tracked to various degrees of accuracy through a multitude of devices,
including most mobile phones via applications (Mann, 2017). Blood pressure measure-
ment has only begun to enter the wearables market recently; however, as the technology
is developed it is likely to become more widespread and available in mainstream de-
vices (Redlitz, 2017). Heart rate is one of the most common health metrics, available in
virtually all mainstream fitness wearables including Fitbit, Jawbone, and Apple Watch.
Similarly, almost all devices have some form of pedometer measuring steps, walking
duration, pace, and distance walked when combined with GPS.

Poor health metrics were classified as shown in Table 1 using results from previous
literature and official bodies (World Health Organization, 1995; World Health Organi-
zation and International Society of Hypertension Writing Group, 2003; Leitzmann et al.,
2007; Cappuccio et al., 2010b; Kvaavik et al., 2010; Jensen et al., 2013; Department of
Health, 2016). The effect of these metrics on subject survival time is assessed by the Cox
proportional hazards model.3
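
As a rough illustration of how the Table 1 classification could be operationalized, the sketch below encodes each poor-metric criterion as a 0/1 flag from a table of measurements. It is not part of the original analysis, and the column names are hypothetical stand-ins for the HALS and wearable variables.

```python
import pandas as pd

def poor_metric_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Encode the Table 1 'poor health' classifications as 0/1 flags.
    Column names are hypothetical; substitute the actual variable names."""
    flags = pd.DataFrame(index=df.index)
    flags["Al"] = (df["alcohol_units_per_week"] > 14).astype(int)
    flags["Sm"] = df["current_smoker"].astype(int)
    flags["Bmi"] = (df["bmi"] >= 30).astype(int)  # obese
    flags["Phy"] = (df["leisure_exercise_min_per_week"] <= 120).astype(int)
    flags["Sd"] = ((df["sleep_hours_per_day"] < 7) |
                   (df["sleep_hours_per_day"] > 9)).astype(int)
    flags["Bp"] = df["hypertensive_reading"].astype(int)
    flags["Rhr"] = (df["resting_heart_rate_bpm"] >= 90).astype(int)
    flags["Wd"] = (df["walking_min_per_day"] < 20).astype(int)
    return flags

# The health scores defined later are simple row sums of these flags, e.g.
# Health Score II = flags[["Phy", "Sd", "Bp", "Rhr", "Wd"]].sum(axis=1).
```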

STATISTICAL ANALYSIS—RESULTS
Of the 7,414 participants who agreed to a visit from the study nurse, 291 (3.9 percent)
had incomplete measurements and had to be excluded from the analysis. A further 238
(3.3 percent) were unable to be categorized in the June 2009 survey due to reasons such
as having departed overseas, no longer being registered on the NHS, or simply being
unable to be contacted. This left n = 6,885 suitable subjects for the following analyses,
out of which 2,160 (31.4 percent) died prior to June 1, 2009. The principal outcome in this

3 For further information on the Cox model, see Cox (1972).


TABLE 2
Results of Individual Cox Regressions

Health Metric Deaths Coefficient p-Value HR (95% CI)

Alcohol consumption 25.9% 0.13 0.04 1.14 (1.01–1.29)

Smoking 33.7% 0.53 0.00 1.69 (1.55–1.85)

BMI 43.4% 0.28 0.00 1.32 (1.17–1.50)

Physical activity 36.4% 0.33 0.00 1.39 (1.22–1.59)

Sleep duration 39.6% 0.15 0.00 1.16 (1.06–1.26)

Blood pressure 68.1% 0.23 0.00 1.26 (1.13–1.41)

Resting heart rate 45.3% 0.47 0.00 1.60 (1.36–1.90)

Walking duration 40.7% 0.22 0.00 1.25 (1.13–1.38)

study was survival time, measured as the time in years between collection of baseline
data until death or date of censorship (June 1, 2009). All statistical tests and analyses
were performed using Stata 14.1 (StataCorp, 2015).

A log-rank test was performed to assess the Kaplan–Meier (survival) functions of males
and females, with the null hypothesis for the test assuming the estimates for the two
sexes are equal. A p-value of 0.00 indicated this was not the case. Accordingly, all Cox
proportional hazards regression models were adjusted for sex and age.
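
For readers wishing to reproduce this step, a minimal sketch using the Python lifelines package (the article itself used Stata 14.1) is given below; the column names time, died, age, sex, and Sm are illustrative assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: time (years to death or censoring), died (1 = death
# before June 1, 2009), age, sex (0 = female, 1 = male), and the Table 1 flags.
df = pd.read_csv("hals_subjects.csv")

# Log-rank comparison of the Kaplan-Meier survival functions of the sexes.
males, females = df[df["sex"] == 1], df[df["sex"] == 0]
lr = logrank_test(males["time"], females["time"],
                  event_observed_A=males["died"],
                  event_observed_B=females["died"])
print(f"log-rank p-value: {lr.p_value:.4f}")  # small p-value motivates adjusting for sex

# Age- and sex-adjusted Cox regression for a single metric, e.g. smoking (Sm).
cph = CoxPHFitter()
cph.fit(df[["time", "died", "Sm", "age", "sex"]],
        duration_col="time", event_col="died")
cph.print_summary()  # coefficients and hazard ratios in the style of Table 2
```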

Individual Health Metrics
Adjusted Cox regressions were run for all eight metrics individually, with results dis-
played in Table 2. The p-values in the table report the significance of the coefficients
using the Wald test statistic, where the null hypothesis assumes βi = 0. We can see that
all the variables were statistically significant at the 99.5 percent level of confidence ex-
cept for alcohol consumption, which was significant at the 96 percent level. The hazard
ratios show smoking raised the probability of death to the greatest extent, whereas al-
cohol consumption and abnormal sleep duration had the least effect. It is important to
note the nonlinear relationship between the percentage of deaths of those with a poor
health metric and the corresponding hazard rate. A total of 68.1 percent of those with
high blood pressure died in the study versus only 33.7 percent of smokers; however,
the hazard ratio of smokers was much higher. Adjusting for age or taking into account
the survival times after measurement could have marked effects on the coefficients of
our model. For reference, the hazard ratios for smoking and high blood pressure after
removing the adjustment for age were 1.11 (from 1.69) and 3.37 (from 1.26), respectively.
This could suggest high blood pressure may be in part caused by age, be more likely
to cause death when the subject is at an older age, or be related to age in another way
entirely.

Combined Health Metrics
For the combined health metrics analyses discussed below, each variable was assigned a point
of 1 for a poor health metric classification and 0 otherwise, with classifications
defined as in the “Assessment of Health Metrics” section.


TABLE 3
Distribution of Health Score I

Points Frequency Percentage of Total Deaths Percentage Died

0 573 8.3% 54 9.4%

1 1,949 28.3% 355 18.2%

2 2,378 34.5% 797 33.5%

3 1,428 20.7% 646 45.2%

4 475 6.9% 250 52.6%

5 73 1.1% 51 69.9%

6 9 0.1% 7 77.8%

A Cox regression was then run on all the variables at once. There were no major differ-
ences between this and the individual regressions, except that the variable for alcohol
consumption was no longer significant. Further investigation showed that this was
mainly due to collinearity between alcohol consumption and another explanatory vari-
able: smoking. This would be expected considering 51 percent of those with a poor health
classification for alcohol consumption were current smokers, whereas only 26 percent
of smokers were considered to have poor drinking behavior; thus, much of alcohol’s
effects would likely be incorporated into the smoking variable. This lack of significance
of an alcohol variable has been seen in similar studies on different populations such as
that by Ding et al. (2015), who note that while it may not show significance by itself,
when in combination with other metrics such as smoking or physical inactivity it can
have a strong association with all-cause mortality. There is also a general consensus
of alcohol and mortality having a U-shaped relationship, in which both drinkers and
nondrinkers have an increased risk (Khaw et al., 2008). The model showed a high level
of significance overall, with χ²(10) for the log-rank test giving a p-value of 0.00. We can
also see the significance of the wearable-related health metrics when combined with the
more commonly used metrics. This suggests that there may be some benefit to a model
consisting of measurements made by wearable technology.
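
The cross-tabulation behind the 51 percent and 26 percent figures can be reproduced directly from the 0/1 flags; a small sketch (with toy data standing in for the HALS flags) follows.

```python
import pandas as pd

# Toy stand-in for the Al (alcohol) and Sm (smoking) flags from Table 1.
flags = pd.DataFrame({"Al": [1, 1, 0, 0, 1, 0, 1, 0],
                      "Sm": [1, 0, 1, 0, 1, 0, 1, 1]})

# Row-normalized cross-tab: of subjects flagged for alcohol, what share smoke?
# In the article roughly 51% of poor-alcohol subjects were current smokers,
# while only 26% of smokers were flagged for poor drinking behavior.
print(pd.crosstab(flags["Al"], flags["Sm"], normalize="index"))
print(flags[["Al", "Sm"]].corr())  # correlation as a simple collinearity check
```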

Health Score I. We formulate Health Score I, composed of all significant variables in
the previous analyses. The health score was created by summing the points for each
individual subject, giving a possible range of 0 to 7 points. A lower score was indicative
of a healthier lifestyle, and thus it was hypothesized that survival probability would
decrease with the increase of poor health metrics. Health Score I can be summarized as

Health Score I = Sm + Bmi + Phy + Sd + Bp + Rhr + Wd, (1)

with variables defined in Table 1. A Cox regression was run on Health Score I, assessing
both the overall explanatory power of the score and its effectiveness across its range.

The distribution of Score I is shown in Table 3. We can see that no subjects achieved the
maximum points tally of 7, and only nine subjects had 6 points, which could affect the


TABLE 4
Results of Cox Regression for Health Score I

Variable Coefficient p-Value HR (95% CI)

Score I 0.27 0.00 1.31 (1.26–1.36)

0 1 (Reference)

1 0.02 0.88 1.02 (0.77–1.36)

2 0.51 0.00 1.66 (1.25–2.19)

3 0.74 0.00 2.10 (1.59–2.78)

4 0.86 0.00 2.37 (1.76–3.19)

5 1.31 0.00 3.69 (2.51–5.42)

6 1.10 0.01 3.00 (1.36–6.61)

significance of the regressions at this value. The final column shows that as the number
of points increases, the percentage of deaths at each points total is strictly
increasing, as hypothesized.

Running an adjusted Cox regression showed that Health Score I was able to predict
survival time to a high level of significance, with p-value 0.00. The first row of Table 4
tells us that for an increase of 1 point, a subject has on average a 31 percent higher chance
of dying during the next year. In a stratified analysis of the score, we can see that there
is little evidence to suggest that the presence of one poor metric had any effect on
the hazard ratio of a participant. After this, each additional poor metric shows an increase
in predicted hazard ratio until a total of 6 is reached; however, this is likely due to the
small sample size for this score value, which is apparent on inspection of the wide
range seen in the 95 percent confidence intervals (CIs). The CIs for adjacent point totals
also overlap, which may suggest that an increase of just 1 point in Score I may not be
statistically significant or indicative that too many metrics are being used.

Health Score II. Here, we create an alternative health score that consisted of only the
health metrics deemed viable to be measured by wearables, as seen in the “Assessment
of Health Metrics” section. Health Score II was defined by

Health Score II = Phy + Sd + Bp + Rhr + Wd (2)

and was analyzed in a similar manner to Health Score I.

Again the maximum points total, in this case 5, had a very low frequency of subjects and
so was unlikely to be able to provide a strong level of significance of increased mortality
above those with 4 points. This can be viewed in Table 5. As before, the percentage of
deaths for each total number of points is increasing, with the difference between each
level more distinct.

The Cox regression for this score illustrated a strong ability to predict relative survival
time, producing a p-value of 0.00, as seen in Table 6. Score II shows an expected 25 percent
increase in the risk of death in the next year for each increase of 1 point. The presence of one single


TABLE 5
Distribution of Health Score II

Points Frequency Percentage of Total Deaths Percentage Died

0 871 12.7% 93 10.7%

1 2,820 41.0% 647 22.9%

2 2,345 34.1% 927 39.5%

3 740 10.8% 415 56.1%

4 104 1.5% 73 70.2%

5 5 0.1% 5 100.0%

TABLE 6
Results of Cox Regression for Health Score II

Variable Coefficient p-Value HR (95% CI)

Score II 0.23 0.00 1.25 (1.19–1.31)

0 1 (Reference)

1 0.19 0.08 1.21 (0.97–1.51)

2 0.46 0.00 1.58 (1.27–1.96)

3 0.64 0.00 1.88 (1.50–2.38)

4 0.92 0.00 2.50 (1.83–3.41)

5 0.90 0.05 2.45 (0.99–6.05)

poor metric was again not statistically significant at the 95 percent level; however, with
a p-value of 0.08 and a hazard ratio of 1.25, there appears to be an indication of one
poor metric having some effect on survival time. When stratifying the score, again we
see the hazard ratio increases as we increase the number of points until we reach a
total of 5. Once more, this is likely due to the sample size of five subjects (0.1 percent)
above anything else. Overlapping CIs are seen again with the individual points totals,
suggesting that there is still room to improve the model.

Health Score III. We further refine Health Score II in order to achieve nonoverlapping
CIs, resulting in the creation of Health Score III, a final model that categorized subjects as
low, increased, and high risk. This score categorized subjects into three groups as shown
in Table 7. The aim of Health Score III was to summarize participants into distinct groups
without any overlapping of 95 percent CIs.

Adjusted Cox regressions were run on Score III with results displayed in Table 8. Similar
to Score II, the model is statistically significant, with a greater hazard ratio between
neighboring values, as would be expected given that each step represents more poor metrics.

In the stratified analysis, we can also see a score of 1 is no longer insignificant, a major
flaw present in the previous models. In addition, we have now managed to successfully


TABLE 7
Classification of Health Score III

Points in Score II   Score III Variable   Risk Classification

0 or 1               0                    Low risk
2 or 3               1                    Increased risk
4 or 5               2                    High risk

TABLE 8
Results of Cox Regression for Health Score III

Variable Coefficient p-Value HR (95% CI)

Score III 0.35 0.00 1.42 (1.31–1.54)

Age 0.10 0.00 1.10 (1.10–1.11)

Sex 0.49 0.00 1.63 (1.49–1.77)

0 1 (Reference)

1 0.34 0.00 1.40 (1.27–1.53)

2 0.74 0.00 2.10 (1.66–2.65)

remove the overlapping of 95 percent CIs with adjacent scores. We can interpret this as
meaning there is over a 95 percent probability that an arbitrary subject will be classified
in the survival category that best describes their survival function.

We can now write an equation for the hazard function λ(t) of a subject, with respect to
vector Xi consisting of Health Score III, age, and sex as

λ(t | Xi) = λ0(t) exp[0.35(Score IIIi) + 0.10(agei) + 0.49(sexi)], (3)

where λ0(t) denotes the baseline hazard function.

The age and sex variables are included as a result of the model adjustment. Our Cox
model suggests a hazard ratio of exp[0.10] = 1.11 for an increase in age of 1 year, so the
probability of death for an arbitrary subject increases by approximately 11 percent each
year. A value of 0 for the variable sex denoted a female, and 1 a male; thus, the model
predicts that, all else being equal, the hazard rate of a male subject will be 63 percent
higher than that of a female.
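
A small numeric illustration of equation (3), using the rounded coefficients from Table 8 (a sketch only; the published model would use the unrounded estimates):

```python
import math

def relative_hazard(score_iii: int, age: float, sex: int) -> float:
    """exp[0.35(Score III) + 0.10(age) + 0.49(sex)] from equation (3);
    sex is coded 0 for female and 1 for male, as in the article."""
    return math.exp(0.35 * score_iii + 0.10 * age + 0.49 * sex)

# Hazard of a 50-year-old high-risk male relative to a 50-year-old low-risk female:
ratio = relative_hazard(2, 50, 1) / relative_hazard(0, 50, 0)
print(round(ratio, 2))  # exp(0.70 + 0.49) ≈ 3.29 times the low-risk female's hazard
```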

A graphical estimate of the baseline hazard function (low risk) was calculated to
complement the model, along with estimates for the other categories. The baseline
hazard term λ0(t) cannot be represented as a function in (3); however, Stata can use
standard kernel-smoothing methodology (Gray, 1990) to approximate a smooth curve
to the Nelson–Aalen4 estimate, thus making it differentiable. The resulting estimated
baseline hazard rate can be viewed in Figure 1. It is represented by the blue line in the
plot, for which Health Score III is 0. Hazard rates for scores of 1 and 2 are also plotted
for reference. As expected, the hazard rate increases exponentially over time, noticeable
through the slight convexity of the functions. The sharp drop at the end is caused by the
censoring of subjects after differing lengths of time under observation, resulting from
different measurement dates, and has no bearing on the true hazard function.

FIGURE 1
Estimated Hazard Functions [Color figure can be viewed at wileyonlinelibrary.com]
[Figure: smoothed hazard function plotted against time t (0-25 years) for Score III = 0, 1, and 2]

This graph also illustrates the consequence of the proportional-hazards assumption. It
is clear that the smoothed hazard functions are proportional and would be parallel if
scaled logarithmically.5
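
A kernel-smoothed Nelson–Aalen hazard estimate of the kind plotted in Figure 1 can be sketched with the lifelines package as below (the article used Stata's routines; the bandwidth and the score_iii, time, and died column names are illustrative assumptions).

```python
import matplotlib.pyplot as plt
from lifelines import NelsonAalenFitter

naf = NelsonAalenFitter()
for score in (0, 1, 2):
    sub = df[df["score_iii"] == score]  # df as in the earlier Cox sketch
    naf.fit(sub["time"], event_observed=sub["died"], label=f"Score III = {score}")
    # smoothed_hazard_ kernel-smooths the Nelson-Aalen increments, analogous
    # to the smoothed hazard curves of Figure 1 (bandwidth chosen by eye).
    naf.smoothed_hazard_(bandwidth=3.0).plot(ax=plt.gca())

plt.xlabel("t (years)")
plt.ylabel("Smoothed hazard function")
plt.show()
```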

Diagnostics
If we are to discuss the Cox model with Health Score III, and its ability to be used in
the life insurance market, we must first run several diagnostic tests. Although the Cox
model is semiparametric, it still must be checked for misspecification, goodness of fit,
outliers, and influential points.

The simple yet powerful link test was run as a general specification test for the model
(Cleves et al., 2010), with no evidence of misspecification uncovered. When Schoenfeld
(1982) residuals were analyzed graphically to check the proportional hazards assump-
tion, no violation of the assumption was apparent. The assumption was further sup-
ported using a log–log plot for Health Score III, with curves appearing roughly parallel
as expected.
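
In lifelines, the Schoenfeld-residual check of the proportional hazards assumption can be sketched as follows (an illustration under the earlier assumptions, not the authors' Stata commands):

```python
from lifelines.statistics import proportional_hazard_test

# cph is the fitted CoxPHFitter and df the data frame from the earlier sketch.
# Schoenfeld-residual-based test of the proportional hazards assumption.
results = proportional_hazard_test(cph, df, time_transform="rank")
results.print_summary()

# check_assumptions plots scaled Schoenfeld residuals against time and prints
# advice when a covariate appears to violate proportionality.
cph.check_assumptions(df, p_value_threshold=0.05, show_plots=True)
```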

4 See Nelson (1972) and Aalen (1978) for further information.
5 With a slight allowance due to the kernel-smoothing process.


FIGURE 2
Comparison of Kaplan–Meier (Observed) and Cox (Predicted) Survival Functions [Color
figure can be viewed at wileyonlinelibrary.com]
[Figure: survival probability plotted against time t (0-25 years), showing observed and
predicted survival curves for Score III = 0, 1, and 2]

Model-agnostic observed Kaplan–Meier curves (Kaplan and Meier, 1958) were plotted
alongside the predicted survival functions for each score produced by our Cox model to
observe how they compared to the data. This is shown in Figure 2, with the Cox model
appearing to be an excellent fit for estimating survival probability of subjects who
scored 0 or 1 in Score III. There is a slight deviation between predicted and observed
for those who scored 2. We would hope that this is due to the smaller sample size of
this category, and perhaps due to the presence of a few outliers, rather than a deviation
from proportionality. Ideally, if the sample size were to approach infinity, the observed
Kaplan–Meier curves would become indistinguishable from those predicted by the Cox
regression.

Cox–Snell residuals (Cox and Snell, 1968) examining the overall fit of the model showed
little divergence between observed and predicted values. The predictive power of the
model was evaluated with Harrell’s C concordance statistic (Harrell et al., 1982), which
indicated that the model correctly predicted the order of survival times in 86 percent of
instances.
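
Harrell's C can be computed from the fitted model's risk scores; a brief lifelines-based sketch (column names as assumed earlier):

```python
from lifelines.utils import concordance_index

# Harrell's C: the probability that, in a random comparable pair, the subject
# predicted to be at higher risk dies first. Negate the partial hazard so that
# larger values correspond to longer predicted survival.
c = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["died"])
print(round(c, 2))  # the article reports roughly 0.86 for its model
```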

Martingale residuals were calculated, with no evidence in the residuals to suggest that
there were covariates with incorrect functional form. Deviance residuals6 and dfbetas
were used to determine the influence that outliers exerted on Score III, with no
individual subjects considered highly influential (Belsley et al., 2005).

DISCUSSION
The model created in this study demonstrates that insurance rating factors that can
feasibly be measured by wearable devices have predictive power in relation to all-cause
mortality.

6 For further information on deviance residuals, see Therneau et al. (1990).


Benefits of Model
A key benefit of Health Score III is its simplicity: subjects are categorized into just low,
increased, and high risk. This simplicity should help facilitate the possible inclusion of a
similar model within currently developed pricing models. Furthermore, using simple logarithm
rules and the data in Table 8, we can interpret the scores in terms of years added on to
the age of the insured. For each additional point added in Health Score III, an increase
of 3.7 years above the subject’s true age would be equivalent. Using the hazard rates,
we can also say that being considered high risk is equivalent to a low-risk subject being
7.8 years older. The difference in mortality risk due to being male can be quantified as
an increase in age of 5.1 years, which is slightly higher than the average difference in
life expectancy of males and females of 3.7 years (Office for National Statistics, 2016).
The effect on life expectancy would likely be greater at younger ages as well, due to the
higher expected survival times in that population. Being able to transform the health
scores into an equivalent increase in age could drastically simplify the process needed
to integrate them into existing models.
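
The conversion simply divides a factor's Cox coefficient by the age coefficient; with the rounded Table 8 values this gives about 3.5 years per Score III point, and the article's 3.7-, 7.8-, and 5.1-year figures are consistent with an unrounded age coefficient of roughly 0.095 (our inference, not a number stated in the article). A worked sketch:

```python
def equivalent_extra_years(beta_factor: float, beta_age: float) -> float:
    """Extra years of age giving the same log-hazard increase as the factor."""
    return beta_factor / beta_age

# With the rounded Table 8 coefficients:
print(equivalent_extra_years(0.35, 0.10))  # 3.5 years per Score III point
print(equivalent_extra_years(0.49, 0.10))  # 4.9 years for being male
# The article's 3.7, 7.8, and 5.1 are consistent with an unrounded age
# coefficient near 0.095, e.g. 0.74 / 0.095 ≈ 7.8 for the high-risk group.
```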

Data were available in the HALS for other factors that could have been used to adjust
the analyses, such as household income or socioeconomic status; however, in the interest
of simplicity they were not included. By only adjusting for age and sex, the predictive
ability of the model could be retained, and the goal was not to address causality (Ding
et al., 2015). Simplicity reasons were also used to justify the lack of weighting variables;
however, this imprecision would only be expected to reduce the significance of the score,
and strong significance was still present.

Our model acts as a simple proof of concept to help demonstrate, in a rudimentary
sense, that insurance companies may be able to utilize data from wearables as part of
their premium rating process. Thus, we extend our discussion to the benefits of using
wearable-derived data, in insurance pricing, from a general point of view.

General Benefits of Using Wearables Data in Insurance Pricing Models
The recent proliferation of wearable devices, and the resulting explosion in personal
self-quantified health data, has opened up the potential for new rating factors to be
included as part of current life and health insurance pricing models.

There are numerous benefits regarding the use of a wearables model. First, by recording
data on individuals’ health behavior (e.g., biometric self-quantification data collected via
wearable device technology), the information asymmetry between the policyholder and
the insurer is reduced, thus enabling more granular risk differentiation based on the
true risk levels of policyholders. This potentially reduces the problems
of adverse selection, allowing the insurer to price individuals at a more personalized
and accurate level, which should result in a more stable cohort of policyholders who
are fairly priced. It should be noted however that there is a concern that being under
obligation to provide personal data may penalize uninsurables (Yates, 2017); however,
it could be argued that those who do not attempt to remain healthy penalize low-risk
policyholders due to the information asymmetry between insurer and consumer, and
the inevitable adverse selection that comes with it (Gatzert and Wesker, 2014). Although
more individualized pricing may open insurance cover to previously uninsurable risks
(e.g., diabetics who very carefully manage their diet and exercise regimes), it is important


to also consider that some restrictions on risk classification and hence an acceptable level
of adverse selection can increase loss coverage and so make insurance work better for
society as a whole (Thomas, 2008).

A further significant benefit is the potential reduction in underwriting costs borne by the
insurer and consequently the policyholder. In younger and healthier age groups, costs
from frequent medical examinations can actually exceed expected value of claims over
the same period (Pitacco, 2014). If used in combination with other nonwearable met-
rics that require measurement, the select period required between examinations could
be increased due to the reduced risk. The presence of more information throughout the
life of the policyholder, due to the ideally continuous nature of the model, will reduce the
variability of costs from their expected level (Pitacco, 2014). Although this could not be
achieved in this article’s model, the use of a continuous data set would not require much
modification, as at this point the model coefficients are assumed to remain constant over
time and only the covariates can change.

One area in which insurance companies have always struggled is customer engagement, with
customers considering insurance policies more of an obligation, or a “grudge purchase,”
rather than a product. The ability to self-monitor could be an incentive to increase good
behaviors in individuals due to the increased engagement with their own health data
through mobile devices (Abraham, 2016). Policies can also be tailored to individuals’
specific needs. These factors will be important to retain interest, as up to 50 percent of
customers become disinterested and stop using their wearables within 1 year of purchase
(Gore, 2015). Thus, it could be argued that wearables may play a future role in enhancing
the customer relationship, possibly even to the extent that insurance companies begin
to play a greater role in the preclaim period, by incentivizing healthy behaviors. This
has many implications, including the potential for insurers to help with earlier
identification of conditions such as diabetes and heart disease, to support chronic
disease management, and to address the obesity epidemic through financial incentives
and encouragement. Clearly, there is potential for enhancing the policyholder relationship
as well as ultimately bringing about interventions that lead to a reduction in claims.
Wearables may also bring about a more fluid and continuous relationship between the
insurer and policyholder. Historically, there was little to no interaction between the two
parties between point of sale and claim or renewal. Data provided on a more continu-
ous basis with potential ensuing rewards (e.g., monthly premium discounts based on
activity levels) provide the opportunity for greater “touch points” in the relationship
and thus may lead to lower churn rates.

Although there is great potential for insurance companies to incorporate wearables
into their insurance products, there are many hurdles yet to overcome. Of paramount
importance are the issues of fraud detection and the questionable accuracy of many
devices/metrics. At present, some wearable data are open to fraudulent reporting as
individuals may be able to record data that are not indicative of their own behavior.
As can be easily imagined, this is particularly problematic in relation to metrics such
as number of steps taken. Device accuracy also represents another problem. Certain
metrics are currently measured consistently and accurately via wearables, whereas other
metrics show a large discrepancy. For example, a 2017 Stanford study found that energy
expenditure readings were very inaccurate, whereas in contrast, heart rate metrics were


found to be within 5 percent of the true value for most devices (Shcherbina et al.,
2017).

LIMITATIONS
Despite the numerous diagnostic tests used to validate the model, the findings in this
article must be interpreted in the light of the study’s limitations.

Measurement Limitations
Despite the large sample size in the HALS, certain categories investigated were small,
such as the high-risk category for Health Score III, and to an even greater extent the
higher points totals for Health Scores I and II. An increased number of subjects in these
categories would allow for a more accurate calculation of model coefficients, hopefully
narrowing their 95 percent CIs to a more acceptable level.

It is also worth considering the possibility that the selection of subjects analyzed was
itself biased. As there was a response rate of 73 percent in the survey, nonparticipation
bias might have affected the prevalence of associations, which would impact
their generalizability; however, due to the multivariable nature of the model, the health
scores would not be affected to the same degree. In addition to this, Galea and Tracy
(2007) find that lower participation rates are unlikely to have a substantial effect on
exposure event associations. This suggests that associations, relative to prevalence, are
less reliant on sample representativeness.

A particularly significant shortfall in the HALS was the length of the follow-up period
spanning only 25 years. This is simply due to the date of the initial data collection, and
so the data set will improve in time; however, it was speculated that the model may
only be relevant for certain segments of the population. For example, a 30-year-old with
high blood pressure who exercises infrequently is still unlikely to die in the next 25
years, whereas an 80-year-old is likely to die over the same period regardless of the
underlying health metrics they possess. Thus, the effect of possessing these metrics is
hidden from our data set. In order to investigate this, the sample was stratified into age
groups spanning 10 years starting from 30 years old. Log-rank tests were performed
for Score III within each of the age groups. The results showed that the model is only
successful in predicting survival times between ages 40 and 80. This means that there
may not be optimal data for 44.7 percent of the participants in our data set. In fact,
these 44.7 percent of participants only accounted for 13.1 percent of deaths. While 100
percent of those above 80 years old died in the follow-up period, there were deaths
in only 4.0 percent of those under 40 years old. With 96.0 percent of subjects under 40
being censored in June 2009, it would be very difficult to find significant results for that
population due to a very small proportion providing an exact survival time.

A final limitation identified in the HALS was the possibility of measurement error.
Although several metrics were taken by a study nurse, others such as physical
activity, walking, and sleeping duration were not subject to the same level of scrutiny.
When variables are self-reported, they are almost always subject to misclassification
bias (Maudsley and Williams, 1996). Physical activity and walking were estimated by
calculating the average amount achieved over the previous fortnight; however, for many
participants these 2 weeks may not have been representative of their lifestyle as a whole.
Something as simple as bad weather in a region could have impacted reported walking


levels for its subjects. The survey question on average sleep duration was little more
than a best guess by participants, and an estimate over the long term would likely be
difficult for most to answer accurately. There is also the possibility that this bias could
be nonrandom: on average, subjects tend to report favorable behaviors due to social
desirability bias. Fortunately, the nature of this study means that this bias will more
often be toward the null (Ford et al., 2011).

Methodological Limitations
While the HALS had its limitations, the design of the model itself had its own short-
comings. Poor indicators for different metrics may have the same underlying cause, and
thus the presence of a second poor metric may exaggerate the mortality risk due to the
additive nature of the health score. This is particularly relevant as our model considers
the metrics to be indicators of risk rather than causes. Confounding could also be in
effect; confounding occurs when an included variable is correlated with both the dependent
variable and an independent variable. Walking duration is a prime example of this, as it was
shown with physical activity in the “Combined Health Metrics” section to be individ-
ually significant in the same model and also likely to be related to one another. On the
other hand, studies have shown that associations of particular pairs of metrics with
mortality can be much higher than the sum of their individual associations as measured
by hazard ratios (Ding et al., 2015), and so the inclusion of a score multiplier could be a
useful addition in these cases. A further modification that could be made to improve the
modeling of multiple metrics would be the inclusion of weighting factors to represent
their effects on hazard ratios. In the Cox regression performed in the “Combined Health
Metrics” section, resting heart rate had a much higher hazard ratio (1.45) relative to
high blood pressure (1.17). Allowing a greater weighting to resting heart rate in this
scenario might increase the predictability of our health scores. As mentioned before,
simplicity was a key aspect of the model and the capturing of these kinds of interactions
were not a primary concern when there was already a strong relationship with mortality
present.
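
One simple form such weighting could take is to use each metric's Cox coefficient as its weight, which amounts to a linear predictor restricted to the wearable metrics; the sketch below uses the individual-regression coefficients from Table 2 purely for illustration.

```python
import pandas as pd

# Illustrative weights: the individual Cox coefficients from Table 2 for the
# wearable-measurable metrics. A production model would re-estimate these jointly.
WEIGHTS = {"Phy": 0.33, "Sd": 0.15, "Bp": 0.23, "Rhr": 0.47, "Wd": 0.22}

def weighted_health_score(flags: pd.DataFrame) -> pd.Series:
    """Coefficient-weighted sum of the 0/1 poor-metric flags."""
    return sum(WEIGHTS[m] * flags[m] for m in WEIGHTS)

# Usage: weighted = weighted_health_score(poor_metric_flags(subjects))
```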

A common step taken in epidemiology studies is to exclude any subjects with previous
diagnoses or chronic diseases such as cancer, heart disease, or stroke. Such a condition
could affect not just survival time, but whether a subject presents poor health metrics
or not. However, as this study was not investigating causation, only indicators of poor
health, it was opted to leave these subjects in the main analyses. Poor metrics associated
with these conditions were something we hoped to capture, as from an insurance per-
spective our score must be based on the likelihood of a claim for death or illness being made.
The presence of previous conditions affecting results in this way is known as reverse
causality. In the interest of thoroughness, a separate Cox regression was run excluding
all those who possessed these conditions at measurement or died within the first 2 years
of follow-up as per the methods of Ding et al. (2015) with results displayed in Table 9.
The exclusion of 144 subjects did not affect the significance of the score and comparison
with the main analysis results in Table 8 showed little difference apart from a slight
reduction of hazard ratio for a value of 2. This is more than likely a result of the sample
size, with this additional regression excluding 13 of 78 deaths in this category.

In the interest of simplicity, only all-cause mortality was considered as the primary
outcome, yet much information could be gained by recording cause-specific mortality


TABLE 9
Results of Cox Regression Adjusted for Reverse Causality

Variable Coefficient p-Value HR (95% CI)

Score III 0.32 0.00 1.38 (1.27–1.51)

0 1 (Reference)

1 0.32 0.00 1.38 (1.26–1.52)

2 0.66 0.00 1.93 (1.49–2.49)

or onset of particular conditions as well. Without consideration of cause of death, we
may be misrepresenting the significance of our health score as deaths may not always be
for health reasons. For example, the leading cause of death for 20- to 34-year-olds was
suicide, with 24 and 12 percent of male and female deaths in this age group, respectively
(Office for National Statistics, 2017). These deaths, among others, would have no causal
relationship with our health score.

Finally, a key attraction of using wearables to price insurance is their ability to take mea-
surements consistently through time, allowing the insured’s risk profile to be updated
in real time without the need to visit a doctor. Due to the nature of the survey data used
in this study, the covariates in our model are assumed to remain constant in time. Future
waves of follow-up data could be incorporated, increasing the applicability of the model
to the real world at the sacrifice of simplicity. This would help to account for behavioral
or physical changes resulting in misclassification (Kvaavik et al., 2010). The stability of
certain behaviors or metrics differs over time, with several studies describing the stabil-
ity of physical activity over time as low or moderate (Telama et al., 2005; Parsons et al.,
2006). Here, we must assume that some degree of stability exists, as evidenced by the
significance of the model’s coefficients.
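
If continuous wearable data were available, the natural extension is a Cox model with time-varying covariates; a minimal lifelines sketch, assuming a hypothetical long-format file with one row per subject per observation interval, is given below.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: one row per subject per interval, with columns
# id, start, stop, event (death in this interval), and the covariates measured
# at the start of the interval (e.g. score_iii, age_at_start, sex).
long_df = pd.read_csv("hals_long_format.csv")

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", start_col="start", stop_col="stop",
        event_col="event")
ctv.print_summary()  # coefficients now reflect covariates updated over time
```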

CONCLUSION
In conclusion, Health Score III acts as a proof of concept, demonstrating the potential
for the inclusion of rating factors, based on wearables data, to be included in health
and life insurance pricing models. The model also potentially acts as a starting point
for wearable-derived data inclusion in a more fully formed pricing model, especially
those that wish to utilize rating factors such as resting heart rate, blood pressure, sleep
duration, and walking duration. The suitability of the existing metrics would require
further evaluation with weighting, substitution, and erasures taking place. With this in
mind, there are several areas that could provide the basis for future research.

As this model only considered all-cause mortality as an event of interest, it is not directly
applicable to pricing health insurance in its current form. An investigation into cause-
specific mortality however would be the first movement in this direction, and inclusion
of the onset of disease or other conditions would be a logical next step. It is worth noting
that a larger data set would be required to provide enough occurrences of each condition
to produce statistically significant results. At a certain point, it would also be necessary
to run the model on a continuous data set in order to better simulate the real-world data
it was developed for.


REFERENCES
Aalen, O., 1978, Nonparametric Inference for a Family of Counting Processes, Annals of Statistics, 6(4): 701-726.

Abraham, M., 2016, Wearable Technology: A Health-and-Care Actuary’s Perspective, Institute and Faculty of Actuaries.

Azzopardi, M., and D. Cortis, 2013, Implementing Automotive Telematics for Insurance Covers of Fleets, Journal of Technology Management & Innovation, 8(4): 59-67.

Becher, S., 2016, Wearables—A New Chance for Private Insurance Companies From the Underwriting View, Zeitschrift für die gesamte Versicherungswissenschaft, 105(5): 563-565.

Belsley, D. A., E. Kuh, and R. E. Welsch, 2005, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, Vol. 571 (Hoboken, NJ: John Wiley & Sons).

Blaxter, M., 1987, Evidence on Inequality in Health From a National Survey, Lancet, 330(8549): 30-33.

Cappuccio, F. P., L. D’Elia, P. Strazzullo, and M. A. Miller, 2010a, Quantity and Quality of Sleep and Incidence of Type 2 Diabetes, Diabetes Care, 33(2): 414-420.

Cappuccio, F. P., L. D’Elia, P. Strazzullo, and M. A. Miller, 2010b, Sleep Duration and All-Cause Mortality: A Systematic Review and Meta-Analysis of Prospective Studies, Sleep, 33(5): 585-592.

Catlin, T., J. T. Lorenz, B. Münstermann, B. Olesen, and V. Ricciardi, 2017, Insurtech—The Threat That Inspires (Stamford, CT: McKinsey & Company, Financial Services).

Cleves, M., W. Gould, R. G. Gutierrez, and Y. V. Marchenko, 2010, An Introduction to Survival Analysis Using Stata (College Station, TX: Stata Press).

Comstock, J., 2015, Fitbit Adds Auto-Detection of Biking, Running, Elliptical, and More. Retrieved from https://www.mobihealthnews.com/48764/fitbit-adds-auto-detection-of-biking-running-elliptical-and-more

Cox, B. D., 1988, Health and Lifestyle Survey, 1984-1985 [data collection], SN: 2218 (Essex, England: UK Data Service).

Cox, B. D., M. Blaxter, A. L. J. Buckle, N. P. Fenner, J. F. Golding, M. Gore, F. A. Huppert, J. Nickson, M. Roth, J. Stark, et al., 1987, The Health and Lifestyle Survey: Preliminary Report of a Nationwide Survey of the Physical and Mental Health, Attitudes and Lifestyle of a Random Sample of 9,003 British Adults (London: Health Promotion Research Trust).

Cox, D. R., 1972, Regression Models and Life Tables (with Discussion), Journal of the Royal Statistical Society, 34: 187-220.

Cox, D. R., and E. J. Snell, 1968, A General Definition of Residuals, Journal of the Royal Statistical Society Series B (Methodological), 30: 248-275.

Dart, A., 2015, The Case for Connected Wearables in Insurance, Asia Insurance Review. Retrieved from http://www.asiainsurancereview.com/Magazine/ReadMagazineArticle/aid/35855/The-case-for-Connected-Wearables-in-Insurance

Department of Health, 2016, UK Chief Medical Officers’ Low Risk Drinking Guidelines.

Ding, D., K. Rogers, H. van der Ploeg, E. Stamatakis, and A. E. Bauman, 2015, Traditional and Emerging Lifestyle Risk Behaviors and All-Cause Mortality in Middle-Aged and Older Adults: Evidence From a Large Population-Based Australian Cohort, PLoS Medicine, 12(12): e1001917.

Ford, E. S., G. Zhao, J. Tsai, and C. Li, 2011, Low-Risk Lifestyle Behaviors and All-Cause Mortality: Findings From the National Health and Nutrition Examination Survey III Mortality Study, American Journal of Public Health, 101(10): 1922-1929.

Galea, S., and M. Tracy, 2007, Participation Rates in Epidemiologic Studies, Annals of Epidemiology, 17(9): 643-653.

Ganna, A., and E. Ingelsson, 2015, 5 Year Mortality Predictors in 498 103 UK Biobank Participants: A Prospective Population-Based Study, Lancet, 386(9993): 533-540.

Gatzert, N., and H. Wesker, 2014, Mortality Risk and Its Effect on Shortfall and Risk Management in Life Insurance, Journal of Risk and Insurance, 81(1): 57-90.

Georgakis, M. K., A. D. Protogerou, E. I. Kalogirou, E. Kontogeorgi, I. Pagonari, F. Sarigianni, S. G. Papageorgiou, E. Kapaki, C. Papageorgiou, D. Tousoulis, et al., 2017, Blood Pressure and All-Cause Mortality by Level of Cognitive Function in the Elderly: Results From a Population-Based Study in Rural Greece, Journal of Clinical Hypertension, 19(2): 161-169.

Glei, D. A., N. Goldman, G. Rodríguez, and M. Weinstein, 2014, Beyond Self-Reports: Changes in Biomarkers as Predictors of Mortality, Population and Development Review, 40(2): 331-360.

Gopinath, B., V. M. Flood, G. Burlutsky, and P. Mitchell, 2010, Combined Influence of Health Behaviors on Total and Cause-Specific Mortality, Archives of Internal Medicine, 170(17): 1605-1607.

Gore, R., 2015, Insurance, Innovation and IoT: Insurers Have Their Say on the Internet of Things, FC Business Intelligence.

Gray, R. J., 1990, Some Diagnostic Methods for Cox Regression Models Through Hazard Smoothing, Biometrics, 46(1): 93-102.

Gruenewald, T. L., T. E. Seeman, C. D. Ryff, A. S. Karlamangla, and B. H. Singer, 2006, Combinations of Biomarkers Predictive of Later Life Mortality, Proceedings of the National Academy of Sciences, 103(38): 14158-14163.

Hakim, A. A., H. Petrovitch, C. M. Burchfiel, G. W. Ross, B. L. Rodriguez, L. R. White, K. Yano, J. D. Curb, and R. D. Abbott, 1998, Effects of Walking on Mortality Among Nonsmoking Retired Men, New England Journal of Medicine, 338(2): 94-99.

Hamer, M., C. J. Bates, and G. D. Mishra, 2011, Multiple Health Behaviors and Mortality Risk in Older Adults, Journal of the American Geriatrics Society, 59(2): 370-372.

Harrell, F. E., R. M. Califf, D. B. Pryor, K. L. Lee, and R. A. Rosati, 1982, Evaluating the Yield of Medical Tests, Journal of the American Medical Association, 247(18): 2543-2546.

Hilton, A., 2017, Insurtech Is Set to Take the Insurance Industry by Storm, Raconteur.

Iqbal, M. U., and S. Lim, 2006, A Privacy Preserving GPS-Based Pay-as-You-Drive Insurance Scheme, Symposium on GPS/GNSS (IGNSS2006), pp. 17-21.

Jensen, M. T., P. Suadicani, H. O. Hein, and F. Gyntelberg, 2013, Elevated Resting Heart Rate, Physical Fitness and All-Cause Mortality: A 16-Year Follow-Up in the Copen-
hagen Male Study, Heart, 99(12): 882-887.

PRICING INSURANCE USING WEARABLE TECHNOLOGY 409

Jubraj, R., S. Watson, and S. Tottman, 2017, The Rise of Insurtech, Accenture.
Kaplan, E. L., and P. Meier, 1958, Nonparametric Estimation from Incomplete Observa-

tions, Journal of the American Statistical Association, 53(282): 457-481.
Khaw, K., N. Wareham, S. Bingham, A. Welch, R. Luben, and N. Day, 2008, Combined

Impact of Health Behaviours and Mortality in Men and Women: The Epic-Norfolk
Prospective Population Study, PLoS Medicine, 5(1): e12.

Knoops, K. T., L. C. de Groot, D. Kromhout, A.-E. Perrin, O. Moreiras-Varela, A. Menotti,
and W. A. Van Staveren, 2004, Mediterranean Diet, Lifestyle Factors, and 10-Year
Mortality in Elderly European Men and Women: The Hale Project, JAMA, 292(12):
1433-1439.

Kuh, D., R. Hardy, M. Hotopf, D. A. Lawlor, B. Maughan, R. Westendorp, R. Cooper, S.
Black, and G. Mishra, 2009, A Review of Lifetime Risk Factors for Mortality, British
Actuarial Journal, 15(S1): 17-64.

Kvaavik, E., G. D. Batty, G. Ursin, R. Huxley, and C. R. Gale, 2010, Influence of Individ-
ual and Combined Health Behaviors on Total and Cause-Specific Mortality in Men
and Women: The United Kingdom Health and Lifestyle Survey, Archives of Internal
Medicine, 170(8): 711-718.

Leitzmann, M. F., Y. Park, A. Blair, R. Ballard-Barbash, T. Mouw, A. R. Hollenbeck,
and A. Schatzkin, 2007, Physical Activity Recommendations and Decreased Risk of
Mortality, Archives of Internal Medicine, 167(22): 2453-2460.

Lindström, J., and J. Tuomilehto, 2003, The Diabetes Risk Score, Diabetes Care, 26(3):
725-731.

Lucena, F., A. K. Barros, and N. Ohnishi, 2016, The Performance of Short-Term Heart Rate
Variability in the Detection of Congestive Heart Failure, BioMed Research International,
2016: Article No. 1675785.

Mann, J., 2017, The Ultimate Guide to Sleep Tracking. Retrieved from https://sleep-
junkies.com/features/the-ultimate-guide-to-sleep-tracking/

Maudsley, G., and E. Williams, 1996, Inaccuracy in Death Certification—Where Are We
Now, Journal of Public Health, 18(1): 59-66.

Nechuta, S. J., X.-O. Shu, H.-L. Li, G. Yang, Y.-B. Xiang, H. Cai, W.-H. Chow, B. Ji, X.
Zhang, W. Wen, et al., 2010, Combined Impact of Lifestyle-Related Factors on Total
and Cause-Specific Mortality Among Chinese Women: Prospective Cohort Study,
PLoS Medicine, 7(9): e1000339.

Nelson, W., 1972, Theory and Applications of Hazard Plotting for Censored Failure Data,
Technometrics, 14(4): 945-966.

Office for National Statistics, 2016, National Life Tables, UK: 2013–2015 [Statistical Bul-
letin].

Office for National Statistics, 2017, Avoidable Mortality in England and Wales: 2015
[Statistical Bulletin].

Parsons, T. J., C. Power, and O. Manor, 2006, Longitudinal Physical Activity and Diet
Patterns in the 1958 British Birth Cohort, Medicine and Science in Sports and Exercise,
38(3): 547-554.

410 RISK MANAGEMENT AND INSURANCE REVIEW

Pitacco, E., 2014, Health Insurance: Basic Actuarial Models (Cham, Switzerland: Springer
International Publishing).

Redlitz, H., 2017, Wrist-Size Wearables Will Help You Keep Your Blood Pressure
in Check.Retrieved from https://wearablezone.com/news/wearables-track-blood-
pressure-levels/

Schoenfeld, D., 1982, Partial Residuals for the Proportional Hazards Regression Model,
Biometrika, 69(1): 239-241.

Schwartz, J. L., and M. A. Hamilton, 2015, Accessory Overload: Wearable Technology’s
Impact on the Insurance Industry, Voice Magazine.

Seccareccia, F., F. Pannozzo, F. Dima, A. Minoprio, A. Menditto, C. Lo Noce, and S.
Giampaoli, 2001, Heart Rate as a Predictor of Mortality: The Matiss Project, American
Journal of Public Health, 91(8): 1258-1263.

Shcherbina, A., C. M. Mattsson, D. Waggott, H. Salisbury, J. W. Christle, T. Hastie,
M. T. Wheeler, and E. A. Ashley, 2017, Accuracy in Wrist-Worn, Sensor-Based
Measurements of Heart Rate and Energy Expenditure in a Diverse Cohort, Journal
of Personalized Medicine, 7(2): 3.

StataCorp, 2015, Stata Statistical Software: Release 14 (College Station, TX: StataCorp LP).
Sultan, N., 2015, Reflective Thoughts on the Potential and Challenges of Wearable Tech-

nology for Healthcare Provision and Medical Education, International Journal of Infor-
mation Management, 35(5): 521-526.

Tehrani, K., and A. Michael, 2014, Wearable Technology and Wearable
Devices–Everything You Need to Know. Wearable Devices. Retrieved from
http://www.wearabledevices.com/what-is-a-wearable-device/

Telama, R., X. Yang, J. Viikari, I. Välimäki, O. Wanne, and O. Raitakari, 2005, Physical
Activity from Childhood to Adulthood: A 21-Year Tracking Study, American Journal of
Preventive Medicine, 28(3): 267-273.

Therneau, T. M., P. M. Grambsch, and T. R. Fleming, 1990, Martingale-Based Residuals
for Survival Models, Biometrika, 77(1): 147-160.

Thomas, R. G., 2008, Loss Coverage as a Public Policy Objective for Risk Classification
Schemes, Journal of Risk and Insurance, 75(4): 997-1018.

Tudor-Locke, C., C. L. Craig, Y. Aoyagi, R. C. Bell, K. A. Croteau, I. De Bourdeaudhuij, B.
Ewald, A. W. Gardner, Y. Hatano, L. D. Lutes, et al., 2011, How Many Steps/Day Are
Enough? For Older Adults and Special Populations, International Journal of Behavioral
Nutrition and Physical Activity, 8(1): 80.

van Dam, R. M., T. Li, D. Spiegelman, O. H. Franco, and F. B. Hu, 2008, Combined Impact
of Lifestyle Factors on Mortality: Prospective Cohort Study in US Women, BMJ, 337:
a1440.

van den Brandt, P. A., 2011, The Impact of a Mediterranean Diet and Healthy Lifestyle on
Premature Mortality in Men and Women, American Journal of Clinical Nutrition, 94(3):
913-920.

Wahlström, J., I. Skog, and P. Händel, 2015, Driving Behavior Analysis for Smartphone-
Based Insurance Telematics, 2nd Workshop on Physical Analytics, WPA 2015, May 22,
pp. 19-24. ACM Digital Library.

PRICING INSURANCE USING WEARABLE TECHNOLOGY 411

Wilson, P. W. F., R. B. D’Agostino, D. Levy, A. M. Belanger, H. Silbershatz, and W. B.
Kannel, 1998, Prediction of Coronary Heart Disease Using Risk Factor Categories,
Circulation, 97(18): 1837-1847.

Wong, M. L., E. Y. Y. Lau, J. H. Y. Wan, S. F. Cheung, C. H. Hui, and D. S. Y. Mok, 2012,
The Interplay Between Sleep and Mood in Predicting Academic Functioning, Physi-
cal Health and Psychological Health: A Longitudinal Study, Journal of Psychosomatic
Research, 74(4): 271-277.

World Health Organization, 1995, Physical Status: The Use of and Interpretation of An-
thropometry Report of a WHO Expert Committee (Geneva, Switzerland: World Health
Organization).

World Health Organization and International Society of Hypertension Writing Group,
2003, 2003 World Health Organization (WHO)/International Society of Hypertension
(ISH) Statement on Management of Hypertension, Journal of Hypertension, 21(11): 1983-
1992.

Yates, H., 2017, Personal Data May Penalise “Uninsurables,” Raconteur.
Zhang, D., X. Shen, and X. Qi, 2015, Resting Heart Rate and All-Cause and Cardiovascu-

lar Mortality in the General Population: A Meta-Analysis, Canadian Medical Association
Journal, 188(3): E53-E63.


Risk Management and Insurance Review

© Risk Management and Insurance Review, 2018, Vol. 21, No. 3, 435-452
DOI: 10.1111/rmir.12109

FEATURE ARTICLE

INTEGRATING A PROACTIVE TECHNIQUE INTO A HOLISTIC
CYBER RISK MANAGEMENT APPROACH
Angelica Marotta
Michael McShane

ABSTRACT
Cyber threats are an emerging risk posing a range of challenges to organizations
of all sizes. Corporate risk managers need to understand that cyber risk man-
agement must not be a silo in the IT department. Cyber threats are the result
of intelligent adaptive agents that cannot be managed by traditional risk man-
agement techniques only. The article describes the honeypot concept, which is
a proactive measure for identifying and gathering information about attackers
in order to develop suitable and effective countermeasures. In addition, this
article proposes the integration of the honeypot concept into a cyber risk man-
agement approach based on the five preparedness mission areas of the Federal
Emergency Management Agency (FEMA).

INTRODUCTION
Cyber risk has been a topic for years in IT and computer science journals, but only a few
cyber risk articles have appeared in risk management and insurance journals. Hovav
and D’Arcy (2003) and Gatzlaff and McCullough (2010) have researched cyber attacks
from a financial economics perspective to investigate the effect of cyber attacks on an
organization’s stock price. Biener et al. (2015) and Eling and Schnell (2016) provide an
overview of the evolving cyber insurance market and the insurability of cyber risk,
while Eling and Loperfido (2017) investigate and model data breaches from an actuarial
perspective. These articles investigate cyber-related issues, but not the cyber risk man-
agement process itself. Organizations can no longer afford to let cybersecurity dwell in
a technical silo. Cyber threats are different from the risks faced by corporate risk man-
agers. Unlike typical corporate risks, cyber threats result from intelligent actors who
can adapt and change tactics as defenses are implemented, thus rendering past data
quickly obsolete as a predictor of future attacks. In addition, cyber risks are plagued
by information asymmetry, correlated loss, and interdependent security issues (Biener
et al., 2015; Marotta et al., 2017; McShane et al., 2018; Shetty et al., 2018) that hamper
traditional risk management and insurance practices from being effective.

Angelica Marotta works at IIT-Italian National Research Council, Pisa, Italy, and is Research
Affiliate, MIT Sloan School of Management, Cambridge, MA; e-mail: angelica.marotta@iit.cnr.it.
Michael McShane is Associate Professor of Risk Management and Insurance, Old Dominion
University, Norfolk, VA; e-mail: mmcshane@odu.edu.


Like traditional terrorists, cyber criminals have an asymmetric information advantage
and only need to be right once, while defenders need to be correct every time. Cyber
threats are systemic and require much less effort than required for physical terrorism.
A single cyber criminal can attack multiple organizations simultaneously. In addition,
cyber risks are interdependent (Hofmann, 2007; Hofmann and Ramaj, 2011; Ogut et al.,
2005) meaning that the security of an organization depends not only on an organization’s
actions, but on the actions/inactions of other entities, such as contractors and suppliers.
Risk managers need to understand the emerging cyber risk threat and work together
with IT specialists to manage cyber risks in a holistic manner.

Organizations face a growing list of cyber threats, such as data and intellectual property
theft, ransomware, and distributed denial-of-service (DDOS) attacks that shut down
websites. Cyber crime costs the global economy approximately $445 billion a year with
the world’s largest economies accounting for around half of this, and is expected to
increase in the coming years (Allianz, 2015). Rapid growth in cyber threats has been
accompanied by a worrying change in attackers’ purposes and techniques that can
render cybersecurity measures ineffective.

Progress enabled by the Internet opens new and easy ways of gathering information.
Even a relatively inexperienced attacker can perpetrate potentially devastating attacks
with large-scale consequences for organizations. An Institute for Critical Infrastructure
Technology (ICIT) report argues that even a “script kiddie”1 could cause serious damage
to the system of a major healthcare provider, using only phishing attacks and exploit
kits available on the Internet (ICIT, 2016). Generally, these attacks occur because organi-
zations have common vulnerabilities (Böhme, 2005; Shetty et al., 2018), which are unin-
tended flaws or design errors that enable an attacker to access multiple organizations.

The purpose of this article is to make risk management researchers and practitioners
aware of emerging cyber concerns; introduce the honeypot concept, which is a proactive
tool for identifying and managing cyber risks by better understanding cyber intrud-
ers; and propose the integration of this tool into a FEMA-based preparedness model.
Depending on the goals of implementation, honeypot technology can range from simple
low-interaction software emulation of services and applications to high-interaction hon-
eypots that can include an actual operating system and other real resources. In the most
recent definition, honeypots are decoy systems implemented to attract cyber attackers,
with the purpose of learning enough to overcome the attacker's information advantage
while also distracting intruders and protecting the real system.

This article is structured as follows. The next section “Honeypot Basics and
Relevant Research” provides a basic understanding of honeypots and highlights rel-
evant research. The subsequent section “Cyber Risk Management Problem Statement”
discusses a major issue facing cybersecurity followed by a proposed honeypot solution
to this problem. Then the integration of this solution into a cyber risk management
model based on the FEMA emergency management approach is outlined followed by
the section “Implementation of Production Honeypots Into the Network” describing
the implementation of a production honeypot in a corporate network. The final section
“Conclusion” concludes and suggests future research on cyber risk management.

1 An inexperienced individual who performs cyber attacks by using tools developed by experts.


HONEYPOT BASICS AND RELEVANT RESEARCH
The widely accepted definition of a honeypot is provided by Spitzner (2002a): “a resource
whose value is in being attacked or compromised.” This definition captures the two
main guiding principles on which every honeypot system is based. First, a honeypot
is a resource, meaning that it may consist of any mechanism, either hardware or soft-
ware, that is necessary to emulate the original system, but is separate from the original
system. For example, this can include workstations, hosts, routers, databases, printers,
programs, or even a simple e-mail. Second, regardless of how a honeypot is imple-
mented and its ultimate security purposes, the primary goal is that the honeypot needs
to be discovered, attacked, and possibly compromised. A honeypot’s primary value is
directly proportional to its ability to attract malicious activities, which is much different
from the goal of other cybersecurity tools and risk management techniques in general.
Therefore, a honeypot can be described as any system that is explicitly implemented
to allow an attacker to identify it as a potentially vulnerable system and to attack the
honeypot in various ways.

The widespread presence of security threats has been one of the most studied topics
in the cyber security field. Initially, the majority of cybersecurity papers focused on
viruses, which were the main cause of computer security issues. As viruses evolved
over the years, resulting in sophisticated attacks designed to directly hit predetermined
targets and cause severe damage, most studies shifted to more complex approaches. The
development of a vast hidden market of malicious code and cyber vulnerabilities has
led several scholars to concentrate their efforts on alternative cybersecurity techniques,
such as intrusion detection and prevention systems and recently, honeypots.

Honeypots are not a new concept. In the late 1980s, several scholars began to examine the
possibility of using honeypots as a security mechanism. One of the first studies related to
the honeypot concept dates to Clifford Stoll’s book entitled The Cuckoo’s Egg (Stoll, 1989).
The author shows how to create an improvised honeypot system to monitor an intruder’s
behavior. However, the honeypot described cannot be considered a real honeypot as the
author used resources that also allowed legitimate users to carry out their tasks. The
current honeypot concept uses a decoy system, not intended for legitimate users, to
specifically attract and fool intruders. Nevertheless, this book is particularly important
as it anticipates two concepts that now characterize honeypot technology. First is the
concept of proactive protection. Defense to protect computer systems can no longer just
be passive when facing potential intruders, but needs to be proactive in dealing with the
causes of intrusions. The second concept is the importance of acquiring information:
studying the intruder’s behavior and actions provides an additional resource against
future attacks. Other early work on the honeypot concept is a technical
article written by a security expert at AT&T Bell Labs (Cheswick, 1992). This article is the
first description of a true honeypot, although the author never uses the term “honeypot.”
The article describes how a system can be used and configured to encourage intrusion.
The documented technique does not involve systems normally accessed by legitimate
users, but rather tools dedicated to attract and understand hackers.

Even though these two studies effectively represent and explain the potential of hon-
eypots, a few years passed before honeypot research studies were published. A likely
reason for this delay was the lack of a widely accepted honeypot definition, resulting in
divergent views and confusion among scholars. Lance Spitzner, a founder of a research
group called “Project Honeynet,” proposed a now widely used “honeypot” definition
that was provided at the beginning of this section (Spitzner, 2002a). This nonprofit
group engaged in the research and investigation of cyber attacks and was one of the
first to elaborate on the modern honeypot concept as a deception system for acquiring
intruder information (Spitzner, 2003). Recent years have witnessed significant progress
and research initiatives in the honeypot field and the implementation of new and more
powerful honeypot systems. The majority of recent work on honeypots includes the
description of honeypots, their categories, their advantages and disadvantages, their
level of interaction and usage, and their applications. Honeypots can be categorized by
purpose and level of interaction (Mokube and Adams, 2007; Sadasivam et al., 2005) as
follows.

Purpose
Honeypot purpose can be categorized as production or research. Production honeypots
are typically best suited for implementation within an enterprise to mitigate risks posed
by cyber intruders to that specific enterprise. The purpose is to understand specific
systems hackers are probing and exploits being used. Research honeypots are designed
to gain information about blackhat hackers in general and are not set up within a specific
organization. Typical goals of a research honeypot are, for example, to learn who the
hackers are, how they communicate, and the tools being developed. Research honeypots
can capture large amounts of data on blackhat hackers but are complex and require time
and effort to deploy and maintain. Production honeypots are typically simpler to build,
deploy, and maintain than research honeypots.

Level of Interaction
Low-interaction honeypots emulate resources, such as services and applications,
whereas high-interaction honeypots are real resources. Low-interaction honeypots are
easier to deploy and maintain, but also easier for attackers to detect, resulting in a low
collection of information about the attacker. This type of honeypot also poses less risk
of the actual system being compromised and can provide useful information, such as
related to spammers. High-interaction honeypots are complex, require substantial re-
sources, use a real operating system, and can collect large amounts of data that can be
analyzed to overcome the hacker information advantage and potentially thwart new
types of attacks. A high-interaction honeypot poses more risk and requires constant
monitoring to prevent intruders from using the honeypot to gain entry into the actual
system.
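
To make the distinction concrete, the sketch below shows roughly what the low-interaction end of this spectrum looks like: nothing is emulated beyond a service banner, and every connection attempt is logged. The port, banner, and log file are illustrative assumptions rather than a recommendation of any particular product; a high-interaction honeypot would instead place a real, instrumented operating system behind the same listening socket.

```python
# Sketch of a very low-interaction honeypot: emulate only an SSH banner on a
# TCP port and record every connection attempt. Port, banner string, and log
# path are illustrative assumptions.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222             # assumed listening address and port
BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"      # fake service banner presented to clients

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, (src_ip, src_port) = srv.accept()
        conn.settimeout(5.0)
        try:
            conn.sendall(BANNER)          # pretend to be an SSH server
            first_bytes = conn.recv(256)  # capture the client's opening bytes
        except OSError:
            first_bytes = b""
        finally:
            conn.close()
        # Any contact with a decoy is suspicious by design, so log everything.
        with open("honeypot.log", "a") as log:
            log.write(f"{datetime.datetime.utcnow().isoformat()} "
                      f"{src_ip}:{src_port} {first_bytes!r}\n")
```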

Beyond specific honeypot categorization, other studies emphasize the following security
issues, some of which overlap.

Digital Forensics
Some authors describe honeypots as an essential tool for the forensic practice. For
example, Kebande et al. (2016) propose a honeypot model aimed at reducing the effort
required to conduct Digital Forensic Investigations (DFIs) by collecting potential digital
evidence and making it available to digital forensic investigators. The authors offer an
interesting use of honeypots as a strategy to learn about and analyze intruder behavior
and operational capability.


Deception, Detection, and Mitigation
Research has investigated the deception aspect of honeypots as well (Dacier et al., 2004;
Kuwatly et al., 2004). Valli (2007) argues that honeypot deception is “based largely on a
premise of masking the real”; that is, an attacker is intentionally misled about network
structure or weak points. He provides a comprehensive overview of honeypots as de-
ception tools, explores the potential of internal honeypots, and discusses the potential
superiority of using honeypots to thwart malicious insiders relative to other security
technologies, such as firewalls, intrusion detection systems (IDSs), and intrusion pre-
vention systems (IPSs). Deception tools, such as honeypots, also improve the detection
and mitigation of security threats. Khattab et al. (2006) and Moore (2016) examine how
honeypot principles could be utilized to detect and mitigate attacks, such as spoofing,2

DDOS, and ransomware attacks. Paradise et al. (2017) investigate the application of hon-
eypots to the reconnaissance phase of advanced persistent threats (APTs) as a way to
collect basic indications of potential forthcoming attacks. A common theme to these ar-
ticles is that honeypots can add an additional layer of security for networks and provide
security capabilities not possible by other measures.

Simulation
Honeypots can simulate a variety of information systems and environments. Litchfield
et al. (2016) argue that the value of honeypots is directly proportional to their ability
to fool attackers into believing they are authentic machines. The authors illustrate the
potential of real-time simulation, while protecting the authenticity of the original system.

Defense
Another common feature of honeypots is their ability to act as defense mechanisms.
Weiler (2002) and Douligeris and Mitrokotsa (2004) investigate the defense of vulnerable
systems using honeypots to analyze attackers and defenders in a given cyber environ-
ment. These authors collect real-world intelligence on attacks and profile all network
activities to counter malicious attacks or threats. In the honeypot context, defense strate-
gies generally include methods, such as reducing the appeal of the environment to the
potential intruders, acquiring a better understanding of the critical vulnerabilities, and
enhancing reaction and response capabilities. Wang et al. (2017) investigate honeypots
as defense systems to detect and gather attack information. In particular, they focus on
the interactions between the attackers and the defenders and derive optimal strategies
for both sides to achieve an optimal defense strategy. Other authors (Gutierrez and
Kiekintveld, 2017; Pawar et al., 2014) focus on the use of honeypot decoy features to
implement cyber defense systems with the purpose of developing models and methods
to acquire information about the blackhat community, understanding their strategies,
and improving security systems accordingly.

Network Security and Protection
As networks increase in speed and process more information, it is essential to adopt
powerful tools that keep up with these changes. Mairh et al. (2011) identify
honeypots as an efficient tool to monitor and protect network traffic since honeypots
require minimal resources and perform efficiently on large networks. Unlike some other
security issues mentioned in this section, this approach does not include the use of a
honeypot as a dynamic tool within a cyber environment. The honeypot’s purpose is
limited to providing a secure environment necessary to ensure operational continuity of
critical services. For example, Bailey et al. (2004) implement a honeypot system with high
threat monitoring capabilities aimed at protecting against existing and future threats.

2 Spoofing is a type of cyber attack that includes the identity falsification of another entity
(a person or a program) with the aim of performing malicious activities.

CYBER RISK MANAGEMENT PROBLEM STATEMENT
The experience built over the years has shown that complete security of a computer
system is not achievable. Generally, ensuring absolute protection is not realistic,
given the necessity to appropriately balance costs and benefits of security measures
(Combs, 2013). Organizations need a strong commitment to fortify and enhance their
systems. Although cybersecurity is a high-profile topic, the majority of organizations
are still unprepared. Often organizations have weaknesses in their security systems and
underestimate the consequences of exposure (Solomon and Chapple, 2005). A study
conducted by the Ponemon Institute states that 75 percent of organizations in the United
States are not prepared to respond to cyber attacks (Scibelli, 2015). The report surveyed
more than 600 IT and security executives about their organizations’ approaches to build-
ing resilience. More than half (55 percent) stated that their organization lacked sufficient
risk awareness, analysis, and assessments in combating cyber attacks (Scibelli, 2015).

Such evidence implies that IT security experts generally do not follow comprehensive
risk management approaches. Protective barriers, such as firewalls and antivirus soft-
ware, are defensive measures against cyber attacks. However, while every organization
today relies on a cybersecurity strategy of perimeter defense to protect their networks,
security breaches are actually increasing in intensity, frequency, and complexity. Even
with these defenses, malicious cyber actors maintain an information advantage over
organizations trying to protect their systems. Additionally, these breaches are often
not detected until well after the first intrusion. This trend indicates that this defensive
strategy may be ineffective and requires continuous security monitoring (Collins, 2017).

One of the most common security weaknesses is the time delay for detecting the in-
trusion and securing the system after the first intrusion. According to Moshiri (2015), a
prudent course of action against intruders consists of working on minimizing the dam-
age to an organization by detecting an intrusion in a timely fashion. Cyber criminals
have figured out how to avoid detection by the common security tools utilized by or-
ganizations. The vast majority of instruments and methodologies employed by security
experts are defense oriented, which can be defeated by adaptive attackers. When the
intrusion is finally detected, the organization implements targeted countermeasures on
an ad hoc basis to mitigate the impact (Hoopes, 2009; Whitman and Mattord, 2007).
An IBM/Ponemon (2017) study of 419 companies in 13 countries found that the mean
time to become aware of a cyber breach is 191 days and it calculated that companies
identifying breaches sooner had much lower total costs related to the breach.3

3 Specifically, the average cost for a data breach identified in less than 100 days was $3.23 million
versus $4.38 million for 100 days or more in 2016.


This defensive approach does little to overcome the attacker’s information advantage,
which can only be alleviated by proactively understanding the motivations and tactics of
the intruders. This article proposes a measure that allows cyber risk managers to inves-
tigate the methods of attackers, which allows the design of targeted solutions to prevent
and reduce the impact of cyber attacks. In this regard, the usage of data collection and
analysis related to the attacker may be essential to decision making. For example, such
analysis may determine whether it is necessary to update the software of an existing
cybersecurity system or to adopt a new one to stay ahead of cyber attackers trying to com-
promise a system (Paté-Cornell et al., 2017). Thus, proactive data gathering and analysis
of attackers can help achieve both short- and long-term cyber risk management goals.

SOLUTION: A PROACTIVE CYBER RISK MANAGEMENT TECHNIQUE
A solution to alleviate the previously highlighted issues is to apply to the cybersecurity
field one of the oldest and most commonly used war techniques: know your enemy
(Tzu, 1971). To apply this technique, we consider a honeypot to be a proactive risk
management tool that can be used to understand potential cyber criminals and apply
the learning to both prevent cyber attacks and detect intrusions earlier, reducing the
impact of intrusions that cannot be prevented.

In a typical scenario, cyber criminals penetrate the corporate system, make modifica-
tions, compromise an important service within the network, and steal sensitive data.
In many cases, the organization may not become aware of the intrusion for a sub-
stantial period of time, compounding the impact on the organization. The longer the
time lag, the more difficult it will be for cybersecurity specialists to understand how
the attack was performed and apply this learning to prevent future attacks (Paradise
et al., 2017). If a honeypot had been employed, a more desirable outcome would oc-
cur. The hackers would be maneuvering in a decoy world created by cybersecurity
professionals to observe and learn from their malicious activities in a safe environ-
ment, such as what ports they tried to open, what data they tried to acquire, and areas
they tried to access (Gutierrez and Kiekintveld, 2017). A honeypot allows cybersecu-
rity specialists to collect this information and use it to develop effective mitigation
strategies.

Honeypot technologies are still not widely deployed, so little evidence is available to
understand their effectiveness. A survey conducted by SANS found that companies with
honeypot experience rated their effectiveness an average of 7.35 out of 10 (Dominguez,
2017). Additionally, when asked how many times honeypots were triggered in the past
12 months, 38 percent of the respondents said their honeypots were triggered 15 or
more times (Dominguez, 2017). Honeypot usage appears to be growing and the trend
is expected to continue. According to a forecast from Markets and Markets (2017), the
deception technology market size is estimated to grow to $2.09 billion by 2021. Honey-
pots are thought to be a worthy candidate to address the current cybersecurity needs
due to their powerful capacity for discovery and ability to adapt to any environment.
However, academic research on honeypot effectiveness is still in its early stages.

INTEGRATION OF HONEYPOTS INTO A CYBER RISK MANAGEMENT APPROACH
This manuscript proposes the integration of the honeypot technique into the FEMA
preparedness approach. This approach provides well-established perspectives and a
holistic philosophy that promotes collaboration and shared understanding beyond the
IT department and has been successfully employed in emergency management and
disaster recovery efforts. The approach is focused on achieving a common, integrated
perspective across all mission areas—Prevention, Protection, Mitigation, Response, and
Recovery.

Just like the NIST Cybersecurity Framework, the FEMA approach is aimed at re-
ducing disaster losses and enhancing security. Both models share common goals
and components, but their structure is different, which explains our preference
for the FEMA approach over the NIST framework for this study. Specifically, the
NIST Framework covers the following critical framework functions: Identify, Protect,
Detect, Respond, and Recover (NIST, 2018). Because honeypots already include Iden-
tify and Detect as main features, the application of honeypot technologies to the NIST
Framework would result in redundancies. The FEMA preparedness model provides
a suitable structure that facilitates mitigation and prevention. This approach is built
on scalable, flexible concepts that move cyber risk management beyond a detection-
only mindset toward resilience goals of adapting and recovering quickly after attacks
occur.

The honeypot technique proposed in this work differs from other methods as it
focuses on understanding the functioning and the potential of cyber attacks in the
context of cyber risk management. Like in some martial arts, this method turns the
attackers’ offensive nature into a cybersecurity strength. We propose integrating
this concept into the five preparedness mission areas defined by the FEMA (2016):
Prevention, Protection, Mitigation, Response, and Recovery (Bullock et al., 2017).
In particular, this approach represents a comprehensive strategy that connects Pre-
vention with Recovery through a series of phases aimed at developing an effective
risk management scheme. The implementation of this solution is represented by
Figure 1, which is a Unified Modeling Language (UML)–type activity diagram to
formulate a dynamic analysis of a honeypot system model applied to the mission
areas.

The “activity diagram” models the process flow including the honeypot from the first
attempt at cyber intrusion to the positive resolution of an intrusion event. Input data
to establish the environmental context is necessary to start the procedure. The aim of
this model is not only providing security for an IT system, but also learning from the
behaviors of the attackers. For example, information about their activities may reveal
who is interested in accessing the system (e.g., employees or former employees), what
they rely on (e.g., an external server), their techniques, the degree of effort to hide
themselves, and their motivation, for example, whether their goal is mainly monetary or
just to sabotage the organization. In particular, the diagram focuses on the interactions
between the honeypot system and the five mission areas, each playing a key role within
the framework.

Prevention (Early Warning)
This area focuses on securing the cyber environment and its infrastructure from unau-
thorized or malicious access, use, or exploitation (Department of Homeland Security,
2014). In this phase, the honeypot acts as an early warning platform to help identify ma-
licious activity and stop it before it spreads further across the real system. For example,
if the organization’s cyber risk management team can detect an intruder on an internal
network scanning for open files, then it is possible to protect the system proactively
before the intruder finds important files (Spitzner, 2002b).

FIGURE 1
Honeypot Applied to the Five FEMA Preparedness Mission Areas

Protection (Isolation)
The honeypot technique enables security experts to determine whether a system or
resource is likely to be compromised since the honeypot reproduces the real system.
Since the real IT infrastructure runs in isolation, the system is less vulnerable. In this
phase, a honeypot may be seen as a protection mechanism to safeguard the real resources.

Mitigation
The information collected in the Prevention phase can be used to further identify, an-
alyze, and prioritize threats for mitigation. In fact, the rapid and timely detection of
threats can be of great help in the development of mitigation strategies. Earlier detection
reduces overall costs of the intrusion. FEMA requires that the mitigation strategy of a
risk management plan responds to the particular risks facing the community (or the net-
work as is the case for cyber risk management) and includes a mix of potential mitigation
actions that not only identify and analyze current threats, but consider future threats as
well, such as the risk due to future attacks or additional development in general risk
areas. Mitigation capabilities need to help prioritize actions based on resources. As risk
mitigation evolves into a more proactive practice, which is made possible by honey-
pots, it is moving away from a focus on static measures or structures to nonstructural
approaches that include planning and management.

FIGURE 2
Positioning Production Honeypots in the Organization’s Network

Response
Honeypots can also be effective as an incident–response tool since honeypots can be
taken offline for analysis without affecting business operations (Meenakshi and Nalini
Sri, 2013), which allows reports to be prepared rapidly. Honeypots can include threat–
response tools that enable security administrators to monitor the activities of an intruder
and activate shutdown mechanisms based on attacker activity and frequency-based
policies (Harrison, 2003).

Recovery
The creation of data related to the behaviors of attackers or the monitoring of malicious
activities plays an essential role during the Recovery phase. This activity could be
performed through the creation of an image of the honeypot hard disk or a backup copy
of the files that contain the virtual machine housing the honeypot if the system provides
virtualization capabilities. This factor can speed up the recovery and restoration process
in case of an attack.
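
The following sketch summarizes, in code form, one way the honeypot output described above might be routed to the five mission areas. The event names, the mapping, and the suggested actions are illustrative assumptions, not part of the FEMA framework or of Figure 1.

```python
# Illustrative routing of hypothetical honeypot events to the five FEMA
# preparedness mission areas. Event names, mapping, and actions are assumed
# examples, not part of the FEMA framework itself.
from dataclasses import dataclass

@dataclass
class HoneypotEvent:
    event_type: str   # e.g., "port_scan", "login_success", "command_input"
    src_ip: str
    detail: str

MISSION_AREA_ACTION = {
    "Prevention": "raise an early-warning alert before the intruder reaches real assets",
    "Protection": "confirm the real system remains isolated and flag the probed resource",
    "Mitigation": "add the observed technique to the prioritized threat list",
    "Response":   "take the honeypot offline for analysis and open an incident",
    "Recovery":   "snapshot the honeypot disk or VM image to speed restoration",
}

EVENT_TO_AREA = {                  # which area each event primarily informs (assumed)
    "port_scan": "Prevention",
    "login_success": "Protection",
    "command_input": "Mitigation",
    "backdoor_install": "Response",
    "session_closed": "Recovery",
}

def route(event: HoneypotEvent) -> str:
    area = EVENT_TO_AREA.get(event.event_type, "Mitigation")
    return f"[{area}] {event.src_ip}: {MISSION_AREA_ACTION[area]} ({event.detail})"

print(route(HoneypotEvent("port_scan", "203.0.113.7", "sweep of ports 1-1024")))
```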

IMPLEMENTATION OF PRODUCTION HONEYPOTS INTO THE NETWORK
To achieve successful results with the implementation of the honeypot solution, at-
tention must be paid to the correct choice and positioning of production honeypots
within the organization’s network. This particular aspect can undermine or improve the
effectiveness of the honeypot implementation.

For example, production honeypots work properly if placed inside the security perimeter
protected by a firewall as shown in Figure 2 (Production Honeypot 1). In this position,
the honeypot can act as an alarm sensor, for example, if an attack succeeds in bypassing
the defensive barriers. A honeypot can also act as a deterrent to divert attacker attention
from valuable resources. Placing the honeypot outside the security perimeter is not
effective because many of the attacks would have been blocked by defense mechanisms.
However, since production honeypots provide reaction mechanisms to divert attacks
from outside, they can also be placed in a demilitarized zone (DMZ) where public
services, such as FTP, WWW, and SMTP, are normally placed, as shown in Figure 2
(Production Honeypot 2).

A specific example of this technique using a production honeypot to develop empirical
schemes is provided to illustrate a potential solution. For this experiment, after evalu-
ating several honeypot options, we decided to use Cowrie, which is a medium interac-
tion secure shell (SSH) honeypot4 implemented in Python and developed by Oosterhof
(2016). Generally, this type of honeypot is used to learn attack techniques and add an
additional security layer to the system. Its functioning is based on a trap represented by
a fake file system and a shell that allows attackers to run and execute commands inside
a simulated environment that provides realistic responses to the attackers (McCaughey,
2017; Oosterhof, 2016).

To test how a honeypot can be mapped to this approach, we installed Cowrie and
configured it using Oosterhof’s code. In the beginning, we performed port scanning
before and after the honeypot initialization phase. This operation was executed through
Network Mapper (Nmap), which is a tool for network exploration and security auditing.5

This utility scans for open ports and then looks up the common port service in a local
text file (Solomon and Chapple, 2005). In the subsequent attack phase, we observed an
attacker guessing the password, accessing the honeypot, and running the commands to
install backdoors on the server. This phase can greatly help cybersecurity experts build
an early warning platform and secure the cyber environment from this type of activity
(Prevention).
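
As a simple stand-in for the Nmap step described above (not the authors’ actual procedure), the sketch below checks whether the honeypot’s listening port is reachable before and after Cowrie is started. The target host is an assumption, and port 2222 is Cowrie’s default SSH port, consistent with the observation reported in footnote 5.

```python
# Simple stand-in (not the authors' Nmap workflow) for checking whether the
# honeypot's SSH port is reachable before and after Cowrie starts. The target
# host is an assumed value; 2222 is Cowrie's default listening port.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded

if __name__ == "__main__":
    target = "127.0.0.1"                        # assumed honeypot host
    for port in (22, 2222):
        print(f"port {port}:", "open" if port_open(target, port) else "closed")
```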

The possibility of identifying this specific attack can also enable security experts to
determine which particular resource is likely to be compromised within the system
(Protection). Among its functions, Cowrie also logs every action that an attacker per-
forms from connect to disconnect (Barron and Nikiforakis, 2017). The logs in our hon-
eypot experiment indicated that an attacker logged on and the commands executed by
the attacker (McCaughey, 2017). This functionality enables cybersecurity experts to get a
better understanding of the type of attacks the intruder attempted to perform within the
honeypot environment, the attacker’s success or failure rate, and also the geographical
location of the IP from which the attack started (McCaughey, 2017). This information can
be used to identify and mitigate future cyber threats (Mitigation). Additionally, a hon-
eypot like Cowrie can analyze a variety of malicious behaviors on the system without
affecting other operations, allowing cybersecurity experts to develop incident response
strategies (Response). Finally, storing and analyzing captured data logs is particularly
important as it aids recovery efforts related to this attack (Recovery).

4 A medium-interaction honeypot offers attackers more ability to run commands and execute operations than does a low-interaction honeypot, but has less functionality than does a high-interaction honeypot, which can deal with more possible attack modes.

5 Initially, no system ports were open, but after the start of the honeypot, we observed that port 2222 was open.
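
To illustrate how such logs might be turned into the summaries described above, the sketch below tallies source IPs, attempted commands, and successful credential pairs from a Cowrie JSON log. The log path and field names reflect Cowrie’s commonly documented JSON output but should be treated as assumptions to verify against the installed version; geolocating the source IPs would require an additional lookup service.

```python
# Sketch of summarizing a Cowrie JSON log (one JSON record per line). The log
# path and the field names ("eventid", "src_ip", "input", "username",
# "password") are assumptions based on Cowrie's usual JSON output plugin.
import json
from collections import Counter

LOG_PATH = "var/log/cowrie/cowrie.json"   # assumed default location

src_ips, commands, logins = Counter(), Counter(), Counter()
with open(LOG_PATH) as fh:
    for line in fh:
        event = json.loads(line)
        src_ips[event.get("src_ip", "unknown")] += 1
        if event.get("eventid") == "cowrie.command.input":
            commands[event.get("input", "")] += 1        # commands the attacker ran
        elif event.get("eventid") == "cowrie.login.success":
            logins[(event.get("username"), event.get("password"))] += 1

print("Top source IPs:", src_ips.most_common(5))
print("Most attempted commands:", commands.most_common(5))
print("Credentials that succeeded:", logins.most_common(5))
```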

Since the purpose of research honeypots is to gather large amounts of general informa-
tion about attack strategies and techniques, they are typically placed outside the security
perimeter. Research honeypots are not suitable for a corporate network as they are too
exposed and should only be implemented for research purposes, not for cyber protec-
tion of a specific organization. In practice, the use of a research honeypot is not feasible
within a corporate environment. Research honeypots are difficult to deploy and typi-
cally simulate the whole operating system. The complex and powerful nature of research
honeypots is suitable for scrutinizing attacks and developing general countermeasures
against threats (Lakhani, 2003; Nawrocki et al., 2016), not for protecting specific assets,
which is the main goal in the corporate environment.

ADVANTAGES OF USING HONEYPOTS
The proactive cyber defense solution proposed in this article can mitigate the informa-
tion advantage of cyber attackers and advance cyber threat intelligence. Cyber risks
cannot be managed using traditional risk management tools only. Honeypots can be
adopted by a broad range of organizations from large to small and from simple to
complex. The major benefits of honeypots are increased control during attacks; learning
from incident response results; protecting the actual system while learning about
attackers’ techniques in a risk-free manner; reducing false positives during diagnostic
testing; alerting administrators of any hostile actions before the real system gets attacked
or damaged; allowing experts to perform studies and develop statistics on the types of
malware employed by attackers; diversifying easily configurable environments through
the usage of various technologies integrated with the honeypot; and analyzing and
controlling malicious actions in real time, which allows earlier detection and reduces the
overall cost of the intrusion. In a well-designed honeypot, intruders spend effort attacking the
honeypot instead of the actual system. In addition, a honeypot can produce forensic
evidence that is admissible in a court of law and can detect both insider and outsider
attacks (Sumner, 2002).

Governments, agencies, computer emergency response teams (CERTs), and critical
infrastructure companies might especially benefit from honeypot implementation
(Grudziecki et al., 2012). According to Ashford (2015), these users represent the most
vulnerable subjects as they are frequently targeted by cyber attacks aimed at compromis-
ing equipment or destroying information. These categories of users can use this model
to enhance cyber intrusion detection and incident handling procedures.

CHALLENGES AND RISKS OF USING HONEYPOTS
As with any relatively new technology, there are a few challenging factors and
recommendations linked to honeypots (Spitzner, 2004). One aspect related to the
implementation of honeypots is making a comparison of the different honeypot types
within an organization’s IT infrastructure; it is important to consider various aspects
such as installation, fingerprinting, data value, maintenance, and risks. Therefore,
before starting the deployment of the honeypot product, it is necessary to establish the
budget—the user may decide to create a high-interaction honeypot with physical ma-
chines or a low-interaction honeypot using existing resources and a virtualized system.


The cyber risk management team must be aware that the deployment of honeypots
requires appropriate attention—the creation of a virtual “defenseless” system may
enable attackers to use it as a “bridge” to conduct more attacks or to access the corporate
network.

If honeypots are not well protected, an attacker could use them to attack other systems.
A sophisticated attacker that figures out the existence of the honeypot could turn the
honeypot into a weapon to attack other systems. This weakness could also have legal
consequences due to liability issues. At times, liability for harm to a third party, whether
caused by the honeypot owner or the attacker, can be hard to determine. Harrington
(2014) highlights a number of legal and liability issues arising from the use of honey-
pots. The author explores several risky scenarios, including a compromised honeypot,
the unintended consequences involving the company that deployed a honeypot, an im-
properly configured honeypot, and other legal situations that could lead to substantial
damages to brand image and reputation, and possibly court sanctions. Sokol et al. (2017)
investigate a number of technical and legal issues related to honeypots with a particu-
lar focus on the protection of privacy and personal data under current and upcoming
European Union legal frameworks. Scottberg et al. (2002) discuss the fine line that can
exist between a honeypot used for protection and one used for entrapment.

CONCLUSION
Originally, honeypots were created in response to specific security issues, but have
evolved to become a general cybersecurity technology. Honeypots have been applied
in a variety of applications to address a wide range of vulnerabilities, but with many
potential avenues unexplored. For example, the literature review in this article reveals
no attempts to apply honeypots within an overall cyber risk management process.
This manuscript proposes integrating the honeypot technology as a multilayer risk
management solution with a proactive focus on threats. Generally, honeypots are used
for intrusion detection, simulation, mitigation, defense, or pure research, but broader
use as part of risk management strategy can be an effective method for enterprises to
deal with the dynamic nature of cyber risk. Every attack involves an exploratory phase
by the intruder, which provides cybersecurity experts the opportunity to use honeypots
as a learning tool to obtain more precise information, identify and respond to attacks
more quickly, and implement a comprehensive approach.

Cyber risks are an emerging threat that cannot be handled by traditional risk manage-
ment methods. Such risks are typically managed by siloed IT departments instead of by
a cross-functional risk management approach. This manuscript argues that a honeypot
technique integrated into the five FEMA preparedness areas can be effective in reduc-
ing the information advantage of adaptive cyber criminals. The proposed approach can
greatly assist in cyber disaster preparedness and mitigation of cyber risks. Generally, the
best defenses against cyber incidents are an effective set of plans and best practices. If
properly installed, configured, and maintained, honeypots can add value by proactively
learning about attackers, allowing an early response, and maintaining high security
levels of a networked system beyond defenses typically used.

Risk managers need to understand cyber risks and bring cybersecurity under the holis-
tic enterprise risk management (ERM) umbrella (McShane, 2018). Collaboration across
disciplines is essential for future cyber risk research. A potential research area involving
cross-disciplinary collaboration is to investigate the relation between general risk man-
agement frameworks/standards, such as ISO 31000, and the more specialist IT/cyber
risk frameworks/models such as ISO 27000, NIST, RiskIT, COBIT, and (ISC)². Another
potential research collaboration across disciplines is the effect on cyber insurance pre-
miums if an organization implements proactive cyber defense measures. A study of the
effect of evolving cyber regulation around the globe on risk management is another
important area that researchers should consider.

A honeypot can be a very useful tool with multiple practical applications in the cyber
risk management field. The solution proposed in this work is a good starting point
for future research that employs higher interaction honeypots that are more similar
to the actual system. These types of honeypots can be used to better understand in-
truder motives and actions but are riskier to the organization. Research that improves
and optimizes this risk/return trade-off would be beneficial. A correct correspondence
between the honeypot’s components and the actual system and implementation into
a risk management approach can produce several advantages, such as more effective
workload distribution and lower costs resulting from a reduction in the usage of those
resources that are usually employed in attack detection, defense, and counterattack
processes.

Another potential research topic is the implementation of both research and production
honeypots into corporate networks. Currently, research honeypots are not designed for
use for corporate cyber risk management. A hybrid solution that combines the advan-
tages of both research and production honeypots could be theoretically possible, but
is practically difficult and a challenge that could be tackled in future research. Research
honeypots have been used by academics, resulting in journal articles, but research on
actual corporate honeypots is limited, most likely for corporate confidentiality reasons.
An academic survey and discussion of corporate production honeypot usage would
provide useful information. A cost/benefit analysis of using honeypots to improve
corporate risk assessment is another much needed research avenue.

Honeypots have a high potential for advancing cybersecurity (Zakaria and Kiah, 2012).
By taking advantage of their “attractive” nature, future work could make them
more intelligent and self-manageable through the integration of an artificial intelligence
(AI) mechanism. For example, intelligent honeypots may include devices that are able
to react autonomously to cyber attacks with repressive or defensive actions (Zakaria
and Kiah, 2012). Finally, honeypots represent an interesting research field since they
are still largely unexplored. Today, even though many countries continue to focus on
providing physical security protections, many governments have recently started to
consider the core cyber infrastructure from a comprehensive point of view.

REFERENCES
Allianz, 2015, Allianz Risk Barometer 2016—Top Risks in Focus: Cyber Incidents. Retrieved from http://www.agcs.allianz.com/insights/expert-risk-articles/top-risks-in-focus-cyber-incidents/

Ashford, W., 2015, Critical Infrastructure Commonly Hit by Destructive Cyber Attacks,
Survey Reveals, ComputerWeekly.com. Retrieved from http://www.computerweekly.
com/news/4500243886/Critical-infrastructure-commonly-hit-by-destructive-cyber-
attacks-survey-reveals


Barron, T., and N. Nikiforakis, 2017, Picky Attackers: Quantifying the Role of System
Properties on Intruder Behavior, in: Proceedings of the 33rd Annual Computer Secu-
rity Applications Conference (ACM), pp. 387-398.

Bailey, M., E. Cooke, D. Watson, F. Jahanian, and N. Provos, 2004, A Hybrid Honey-
pot Architecture for Scalable Network Monitoring, Technical Report CSE-TR-499-04,
University of Michigan, Ann Arbor.

Biener, C., M. Eling, and J. H. Wirfs, 2015, Insurability of Cyber Risk: An Empirical
Analysis, Geneva Papers on Risk and Insurance Issues and Practice, 40(1): 131-158.

Böhme, R., 2005, Cyber-insurance Revisited, in: Proceedings of the Fourth Workshop
on the Economics of Information Security (WEIS 2005) (Cambridge, MA: Harvard
University Press).

Bullock, J., G. Haddow, and D. P. Coppola, 2017, Homeland Security: The Essentials
(Oxford, UK: Butterworth-Heinemann).

Cheswick, B., 1992, An Evening With Berferd in Which a Cracker Is Lured, En-
dured, and Studied, in: Proceedings of Winter USENIX Conference, San Francisco,
pp. 20-24.

Collins, R. M. L., 2017, Proactive Cybersecurity Through Active Cyber Defense,
Doctoral Dissertation, Utica College. Retrieved from https://search.proquest.com/
openview/d62179a2e3960662b9fa17c9324b2b39/1?pq-origsite=gscholar&cbl=
18750&diss=y

Combs, C., 2013, Terrorism in the Twenty-first Century, 7th edition (Boston, MA: Pearson).

Dacier, M., F. Pouget, and H. Debar, 2004, Honeypots: Practical Means to Validate Malicious Fault Assumptions, in: Dependable Computing, Proceedings of the 10th IEEE Pacific Rim International Symposium, pp. 383-388.

Department of Homeland Security, 2014, National Protection Framework. Re-
trieved from https://www.fema.gov/media-library-data/1406717583765-996837bf7
88e20e977eb5079f4174240/FINAL_National_Protection_Framework_20140729

Dominguez, A., 2017, Understanding the Use of Honey Technologies Today, SANS Insti-
tute. Retrieved from https://www.sans.org/reading-room/whitepapers/detection/
state-honeypots-understanding-honey-technologies-today-38165

Douligeris, C., and A. Mitrokotsa, 2004, DDoS Attacks and Defense Mechanisms: Clas-
sification and State-of-the-art, Computer Networks, 44(5): 643-666.

Eling, M., and N. Loperfido, 2017, Data Breaches: Goodness of Fit, Pricing, and Risk
Measurement, Insurance: Mathematics and Economics, 75: 126-136.

Eling, M., and W. Schnell, 2016, What Do We Know About Cyber Risk and Cyber Risk
Insurance? Journal of Risk Finance, 17(5): 474-491.

FEMA, 2016, Mission Areas. Retrieved from https://www.fema.gov/mission-areas.

Gatzlaff, K. M., and K. A. McCullough, 2010, The Effect of Data Breaches on Shareholder Wealth, Risk Management and Insurance Review, 13(1): 61-83.

Grudziecki, T., P. Jacewicz, L. Juszczyk, P. Kijewski, and P. Pawliński, 2012, Proactive Detection of Security Incidents, ENISA Report. Retrieved from https://www.enisa.europa.eu/activities/cert/support/proactive-detection/proactive-detection-of-security-incidents-II-honeypots


Gutierrez, M., and C. Kiekintveld, 2017, Adapting With Honeypot Configurations to
Detect Evolving Exploits, in: Proceedings of the 16th Conference on Autonomous
Agents and MultiAgent Systems (International Foundation for Autonomous Agents
and Multiagent Systems), pp. 1565-1567.

Harrington, S. L., 2014, Cyber Security Active Defense: Playing With Fire or Sound Risk
Management? Richmond Journal of Law and Technology, 20(4): 1-41.

Harrison, J., 2003, Honeypots: The Sweet Spot in Network Security, Computerworld.com.
Retrieved from http://www.computerworld.com/article/2573345/security0/
honeypots-the-sweet-spot-in-network-security.html

Hofmann, A., 2007, Internalizing Externalities of Loss Prevention Through Insurance
Monopoly: An Analysis of Interdependent Risks, Geneva Risk and Insurance Review,
32(1): 91-111.

Hofmann, A., and H. Ramaj, 2011, Interdependent Risk Networks: The Threat of Cyber
Attack, International Journal of Management and Decision Making, 11(5-6): 312-323.

Hoopes, J., 2009, Virtualization for Security (Burlington, MA: Syngress Pub.).

Hovav, A., and J. D’Arcy, 2003, The Impact of Denial-of-Service Attack Announcements on the Market Value of Firms, Risk Management and Insurance Review, 6(2): 97-121.

IBM/Ponemon, 2017, Cost of Data Breach Study Global Overview. Retrieved from https://www.ibm.com/security/data-breach/

Institute for Critical Infrastructure Technology, 2016, Hacking Healthcare IT in 2016, Icitech.org. Retrieved from http://icitech.org/wp-content/uploads/2016/01/ICIT-Brief-Hacking-Healthcare-IT-in-2016

Kebande, V. R., N. M. Karie, and H. S. Venter, 2016, A Generic Digital Forensic Readiness
Model for BYOD using Honeypot Technology, in: IST-Africa Week Conference (IEEE),
pp. 1-12.

Khattab, S., R. Melhem, D. Mossé, and T. Znati, 2006, Honeypot Back-Propagation for
Mitigating Spoofing Distributed Denial-of-Service Attacks, Journal of Parallel and Dis-
tributed Computing, 66(9): 1152-1164.

Kuwatly, I., M. Sraj, Z. Al-Masri, and H. Artail, 2004, A Dynamic Honeypot Design
for Intrusion Detection, in: Proceedings of IEEE/ACS International Conference on
Pervasive Services, pp. 95-104.

Lakhani, A. D., 2003, Deception Techniques Using Honeypots (UK: University of London). Retrieved from http://www.isg.rhul.ac.uk/~pnai166/thesis

Litchfield, S., D. Formby, J. Rogers, S. Meliopoulos, and R. Beyah, 2016, Rethinking the
Honeypot for Cyber-Physical Systems, IEEE Internet Computing, 20(5): 9-17.

Mairh, A., D. Barik, K. Verma, and D. Jena, 2011, Honeypot in Network Security: A
Survey, in: Proceedings of the 2011 International Conference on Communication,
Computing & Security (ACM), pp. 600-605.

Markets and Markets, 2017, Deception Technology Market Worth 2.09 Billion
USD by 2021. Retrieved from https://www.marketsandmarkets.com/PressReleases/
deception-technology.asp

Marotta, A., F. Martinelli, S. Nanni, A. Orlando, and A. Yautsiukhin, 2017, Cyber-
insurance Survey, Computer Science Review, 24: 35-61.


McCaughey, R. J., 2017, Deception Using an SSH Honeypot, Doctoral Dissertation,
Naval Postgraduate School, Monterey, CA. Retrieved from https://calhoun.nps.edu/
bitstream/handle/10945/56156/17Sep_McCaughey_Ryan ?sequence=1

McShane, M. K., 2018, Enterprise Risk Management: History and a Design-Science Pro-
posal, Journal of Risk Finance, 19(2): 137-153.

McShane, M. K., T. Nguyen, and N. Xu, 2018, Time Varying Effects of Cyberattacks on
Firm Value, Old Dominion University Working Paper.

Meenakshi, K., and M. Nalini Sri, 2013, Protection Method Against Unauthorised Issues
in Network Honeypots, International Journal of Computer Trends and Technology, 4(4):
655-659.

Mokube, I., and M. Adams, 2007, Honeypots: Concepts, Approaches, Challenges, in:
Proceedings of the 45th Annual Southeast Regional Conference (ACM), pp. 321-326.

Moore, C., 2016, Detecting Ransomware with Honeypot Techniques, in: Cybersecurity
and Cyberforensics Conference (IEEE), pp. 77-81.

Moshiri, M., 2015, 3 Steps for Timely Cyber Intrusion Detection, Securitymagazine.com.
Retrieved from http://www.securitymagazine.com/articles/86604-steps-for-timely-
cyber-intrusion-detection

Nawrocki, M., M. Wählisch, T. C. Schmidt, C. Keil, and J. Schönfelder, 2016, A Survey
on Honeypot Software and Data Analysis, arXiv preprint:1608.06249. Retrieved from
https://arxiv.org/pdf/1608.06249

NIST, 2018, An Introduction to the Components of the Framework. Retrieved from
https://www.nist.gov/cyberframework/online-learning/components-framework

Ogut, H., N. Menon, and S. Raghunathan, 2005, Cyber Insurance and IT Security Invest-
ment: Impact of Interdependent Risk, in: Proceedings of the Fourth Workshop on the
Economics of Information Security (WEIS 2005) (Cambridge, MA: Harvard University
Press).

Oosterhof, M., 2016, Cowrie SSH/Telnet Honeypot. Retrieved from https://github.
com/micheloosterhof/cowrie

Paradise, A., A. Shabtai, R. Puzis, A. Elyashar, Y. Elovici, M. Roshandel, and C. Peylo,
2017, Creation and Management of Social Network Honeypots for Detecting Targeted
Cyber Attacks, IEEE Transactions on Computational Social Systems, 4(3): 65-79.

Paté-Cornell, M. E., M. Kuypers, M. Smith, and P. Keller, 2017, Cyber Risk Manage-
ment for Critical Infrastructure: A Risk Analysis Model and Three Case Studies, Risk
Analysis, 38(2): 226–241.

Pawar, A., S. Bhise, K. Siddhabhati, and S. Tamhane, 2014, Web Application Honeypot.
International Journal of Advanced Research in Computer Science and Management Studies,
2(3): 410-416.

Sadasivam, K., B. Samudrala, and T. A. Yang, 2005, Design of Network Security Projects
Using Honeypots, Journal of Computing Sciences in Colleges, 20(4): 282-293.

Scibelli, G., 2015, New Ponemon Institute Study Reveals That Improving Cyber Re-
silience Is Critical for Prevailing Against Rising Cyber Threats, Businesswire.com.
Retrieved from http://www.businesswire.com/news/home/20150918005579/en/
Ponemon-Institute-Study-Reveals-Improving-Cyber-Resilience


Scottberg, B., W. Yurcik, and D. Doss, 2002, Internet Honeypots: Protection or Entrap-
ment? in: Proceedings of the IEEE International Symposium on Technology and Soci-
ety (ISTAS), pp. 387-391.

Shetty, S., M. McShane, L. Zhang, J. P. Kesan, C. A. Kamhoua, K. Kwiat, and L.L. Njilla,
2018, Reducing Informational Disadvantages to Improve Cyber Risk Management,
Geneva Papers on Risk and Insurance-Issues and Practice, 43(2): 224-238.

Sokol, P., J. Mı́šek, and M. Husák, 2017, Honeypots and Honeynets: Issues of Privacy,
EURASIP Journal on Information Security, 4: 1-9.

Solomon, M., and M. Chapple, 2005, Information Security Illuminated (Sudbury, MA: Jones
and Bartlett).

Spitzner, L., 2002a, Definitions and Values of Honeypots, EETimes. Retrieved from http://www.eetimes.com/document.asp?doc_id=1255091

Spitzner, L., 2002b, Honeypots: Tracking Hackers (Boston, MA: Addison-Wesley).

Spitzner, L., 2003, Honeypots: Simple, Cost-Effective Detection, Symantec.com. Retrieved from http://www.symantec.com/connect/articles/honeypots-simple-cost-effective-detection

Spitzner, L., 2004, Problems and Challenges With Honeypots, Symantec.com. Retrieved
from http://www.symantec.com/connect/articles/problems-and-challenges-
honeypots

Stoll, C., 1989, The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage
(New York: Doubleday).

Sumner, K., 2002, Honeypots Security on Offense. Retrieved from http://sumnerk.
tripod.com/mywebsite/courses/secarch/honeypotpaper

Tzu, S., 1971, The Art of War, Translated and with an introduction by Samuel B. Griffith
(London, UK: Oxford).

Valli, C., 2007, Honeypot Technologies and Their Applicability as a Strategic Inter-
nal Countermeasure, International Journal of Information and Computer Security, 1(4):
430-436.

Wang, K., M. Du, S. Maharjan, and Y. Sun, 2017, Strategic Honeypot Game Model for
Distributed Denial of Service Attacks in the Smart Grid, IEEE Transactions on Smart
Grid, 8(5): 2474-2482.

Weiler, N., 2002, Honeypots for Distributed Denial-of-Service Attacks, in: Proceedings of
the Eleventh IEEE International Workshops on Enabling Technologies: Infrastructure
for Collaborative Enterprises (WETICE’02), pp. 109-114.

Whitman, M., and H. Mattord, 2007, Principles of Incident Response and Disaster Recovery
(Boston, MA: Thomson Course Technology).

Zakaria, W. Z. A., and M. L. Kiah, 2012, A Review on Artificial Intelligence Techniques for
Developing Intelligent Honeypot, in: Proceedings of the 3rd International Conference
on Next Generation Information Technology, pp. 696-701.


Risk Management and Insurance Review

© Risk Management and Insurance Review, 2018, Vol. 21, No. 3, 413-433
DOI: 10.1111/rmir.12110

FEATURE ARTICLE

DRIVERLESS TECHNOLOGIES AND THEIR EFFECTS ON
INSURERS AND THE STATE: AN INITIAL ASSESSMENT
Martin F. Grace
Juliann Ping

ABSTRACT

This article explores the impacts of new auto technologies and their financial
effects on insurance markets, a set of complementary services, and state rev-
enues. We use data from the National Association of Insurance Commissioners,
the National Highway Traffic Safety Administration’s Fatality Analysis Report-
ing System, the Bureau of Justice Statistics, and the Census Bureau to create
a data set that links industry and state finance variables to a set of variables
related to driving. Our purpose in this initial assessment is to estimate the sen-
sitivity of these financial variables to different indices of driving including the
number of drivers, the number of cars licensed per year, and the number of
vehicle miles driven. The resulting estimates are used to create elasticities to
show how sensitive each is to changes brought about by the new technologies.

INTRODUCTION
One of the most salient social risks, the risk of automobile crashes, is predicted to
change with the introduction of new driverless or autonomous technologies. Driverless
technologies may also reduce other costs associated with driving, such as pollution, the
demand for oil, and the widespread productivity losses due to both traffic congestion
and crashes. This article attempts to
document the effect of driverless technologies on insurance markets specifically as well
as state revenues and services related to automobile insurance. As a first endeavor, we
try to analyze the macro effects of a reduction in driving activity and its corresponding
impact on losses and other types of accident-related expenditures.

The United States experiences a significant cost due to auto crashes. A National High-
way Traffic Safety Administration (NHTSA) report (2015) estimates the cost of driving
crashes to be about $836 billion in 2010 (in 2018 dollars, $960 billion), which—in addition
to the deaths, injuries, and property damages—also includes costs due to pollution, con-
gestion, and reductions in quality of life. One of the reasons autonomous vehicles are so
interesting is their potential for significantly reducing these costs. Evidence shows that
even the lowest level of automation, so-called Level 1 automation, which implies one
automated activity (such as automatic braking systems [ABS], blind spot monitoring,
lane departure warning, or forward collision warning), has reduced crashes.1

Martin F. Grace is the Harry Cochran Professor of Risk Management at Fox School of Business,
Temple University, Philadelphia, Pennsylvania; e-mail: martin.grace@temple.edu. Juliann Ping
is a research assistant in the Department of Risk, Insurance and Healthcare Management at Fox
School of Business, Temple University, Philadelphia, Pennsylvania.

Manufacturers claim that self-driving cars will be significantly safer than human-driven
cars as driverless technology will allow for more precise driving and quicker deci-
sion making. This potential increase in safety reduces the propensity for auto crashes
(Litman, 2014). However, self-driving cars in combination with human-driven cars on
today’s public roads may temporarily hinder the ideal prospects of a driverless society.
Conjecture exists that most self-driving cars will produce lower noxious emissions as
the cars will be designed as lightweight, two-passenger vehicles (Burns, 2013). Further,
these cars could be 10 times more energy efficient than today’s typical car (Burns, 2013).
Additionally, since one need not “drive” a self-driving car, the opportunity cost of transit
will be diminished (Frisoni et al., 2016). Driverless technology thus becomes an attractive
opportunity for automakers and consumers alike.

By utilizing the Society of Automotive Engineers (2016) international levels and defi-
nitions of driving automation, we can approach the projections of autonomous driving
with more uniformity and clarity. The levels are as follows:

1. Level 1: driver assistance,

2. Level 2: partial automation,

3. Level 3: conditional automation,

4. Level 4: high automation,

5. Level 5: full automation.

Different projections have been announced by various vehicle and auto parts manufac-
turers on their products and plans. Table 1 illustrates the level of automation that each
manufacturer expects to release in the form of a fleet of cars for either taxis or commercial
sale.

As seen in Table 1, the majority of manufacturers estimate their releases of Level 4 vehicle
technology to be by 2020. Waymo, a division of Alphabet, has already released a fleet
of autonomous cars without safety drivers for testing in the Phoenix, Arizona metro area
(Ohnsman, 2017). Levels 1 and 2 are being used in vehicles today. These technologies
range from ABS to lane monitoring to unassisted parking. Level 3 represents a car to
which the driver can shift certain functions but in which the driver must still be able to
take over if needed.

Levels 4 and 5 have significant automation capabilities, and the difference between
them lies in the fact that Level 5 automation requires self-driving cars to be reliable in
all driving conditions (i.e., bad weather or a rural environment). Before cars advance

1 ABS, for example, while not effective in reducing fatal crashes, reduce nonfatal crashes by 6-8
percent (NHTSA, 2009). See also Harper et al. (2016) who conclude that these Level 1 technologies
could reduce fatal crashes by over 10,000 per year.


TABLE 1
Automation Level Projections According to Manufacturers

Year 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030

Audi 2 3 3 3.5 3.5 3.5 3.5 3.5 3.5 3.5 4 4 4 4

Daimler/Uber 4 4 4 4 4 5 5 5 5 5 5

Delphi/MobilEye 4 4 4 4 4 4 4 4 4 4 4 4

Ford/Lyft 4 4 4 4 4 4 4 4 4 4 4

General Motors 4

Hondaa 2 2 2 3 3 3 3 3 3 3 3 3 3 3

Hyundai 2 4 4 4 4 4 4 4 4 4 4 4.5

Kia 2 2 2 3 3 3 3 3 3 3 3 3 3 4

Mercedes-Benz 3

Nissan 3 3 3 4 4 4 4 4 5 5 5 5 5 5

NuTonomy (Delphi) 4 4 4 4

Nvidia 5 5 5 5 5 5 5 5 5

Otto (Uber) 5 5 5 5

Tesla 3 4

Toyota 3 3 3 3 3 4 4 4 4 4 4

Volvo/Uber 4 4 4 4 4.5

Source: Jaynes (2016), Kessler (2017), Khalid (2017), Kubota (2015), McFarland (2016), Payne (2017),
Ron (2017), Ross (2017), Valdes-Dapena (2017), Walker (2017), Yu, Kim, and Ananthraraman (2017),
Ziegler (2016), and Zimmer (2016).
aHonda estimated that Honda vehicles would experience no crashes by 2040.

to the Level 5 technology standard, we can at least expect that Level 4 technology will
be increasingly utilized in densely populated cities and preprogrammed routes through
large fleets and limited navigation routes.

Widespread implementation of self-driving vehicles into the market will likely be limited
due to initial high costs, slow fleet turnover (cars currently on the road), and design of
safety requirements (Litman, 2014) and the actual implementation of these requirements
(NCOIL, 2017). Further, any fatal accidents during testing, like that of the experimental
Uber car in the spring of 2018, may cause temporary halts to technological experimentation
until immediate safety concerns are met. Together, this creates a poten-
tially significant cost increase and a steep learning curve to the large-scale adoption of
autonomous vehicles by everyday consumers.

While the costs of implementation are significant, some markets are directly connected
to the growth of the use of self-driving vehicles. Arguably, self-driving cars will be safer
and less expensive to insure. Google claimed that its self-driving Waymo will cut U.S.
auto crashes and deaths by 90 percent (Poczter and Jankovic, 2014).

Auto insurers will see a decrease in claim payouts, and there is a suggestion that we
can expect premiums to drop significantly to as low as 90 percent of today’s typical
car insurance premium (Poczter and Jankovic, 2014). Because insurance is typically a
“cost + markup” business, the reduction in costs will reduce the total profitability of
auto-related insurance.

Other industries will likely be affected. The healthcare industry, for example, could lose
patients and revenue because of the decrease in crashes promised by the increase in
safety of the new vehicles. However, there could be more frequent patient travel as
travel becomes affordable for patients who currently may miss appointments due to
cost, inability to drive, or a failure to secure a reliable driver for appointments. This will
save an estimated $19 billion in missed appointment costs (Bits and Atoms, 2017). Also,
at the same time, there will be significant gains in social welfare if fatalities and injuries
are avoided. This translates into higher income for society.

State governments will experience a decrease in traffic tickets and fines, and as a result
there will be less demand for highway patrol officers. If advanced traffic management is im-
plemented (through cloud computing), there will be no need for traffic lights, parking
meters, or other utilities. Regarding public transit, paratransit may be reconstructed to
diminish the current large operating deficits as autonomous vehicles better serve pas-
senger needs (Bits and Atoms, 2017). Further, ride sharing and autonomous technologies
may substitute for regional transportation systems.

Urban property values could also decline as a result of the fall in demand of city parking
lots, which will lead to more city space and less commercial revenue attributed to parking
(Poczter and Jankovic, 2014).

The transition from today’s vehicles to self-driving vehicles could reduce our reliance
on oil. To the extent that self-driving cars will be more energy efficient, induce more car
sharing, or use electric technologies, gas tax revenues could drop. While the demand for
roads will not likely change, policymakers may have to find another source for highway
funds. For example, the State of Georgia adopted an annual license fee for electric cars
that was supposed to replace the lost revenue from the gas tax.2

Autonomous vehicles will also likely prove a detriment to employment in the taxi and
commercial trucking industries. Self-driving taxis will replace the need for taxi drivers, and
self-driving truck fleets will replace the need for truck drivers. The trucking industry has
had a historical labor shortage of good drivers, and this technology could solve one of its
most vexing problems and increase efficiencies. Trucks are expected to be at the forefront
of widespread fleet conversion to driverless technology, which is then to be followed
by taxis (McKinsey, 2016). Additionally, McKinsey (2016) predicts that one-third of new
trucks sold worldwide will be automated by 2025: a huge threat to the employability of
truck drivers. Also, the use of autonomous farm implements in agriculture will change
(even more) the economics of farming.

Because of the gradual introduction of new technology, there will be displacements in
the insurance industry and other industries. This article primarily focuses on how the
substitution of current technologies might affect the insurance industry as well as some
complements to the provision of auto insurance.

2 The tax is $200 per year for private passenger autos and $300 per year for commercial vehicles.
See, for example, https://www.afdc.energy.gov/laws/11602.


FIGURE 1
Alcohol Fatalities and All Fatalities Over Time
[Color figure can be viewed at wileyonlinelibrary.com]

Source: NHTSA, FARS.

The article proceeds as follows. First, we examine the current trends in fatalities, injuries,
safety-related expenditures, health and hospital expenditures, and insurance losses. We
then estimate a series of reduced-form models attempting to obtain elasticity estimates
between measures of accident cost and driving activity. We then undertake a rough wel-
fare analysis based on our estimates. Finally, we conclude and present some additional
thoughts about future work.

BACKGROUND ON MARKETS AND SERVICES LIKELY AFFECTED BY DRIVERLESS TECHNOLOGY
NHTSA collects and disseminates a large amount of data regarding traffic crashes,
which we use in our analysis. However, it is illustrative to look at some trends to put our
analysis in context. We first present summary information to provide background on
the estimation of how insurance markets may change with the introduction of driverless
technology.

Figure 1 shows the time series of fatal crashes over the last 15 years. At around 2004,
we see a drop in fatal crashes, but in 2014, there is an unexpected upturn in total
fatalities. This trend is conjectured to be due to distracted driving, and it has been cited
by insurers as a rationale for increasing insurance prices (Scism and Friedman, 2017).
Notably, alcohol-related fatalities are decreasing, and this is likely due to the tremendous
enforcement efforts, new legislation, and public awareness. If driverless cars are going
to be impactful, they will likely reduce fatalities related to alcohol, distracted driving,
and other accidental causes.

Figure 2 shows the state-level loss ratio over time for the personal lines (auto damage
and liability) as measured by the direct losses incurred divided by the direct premiums
earned for both private passenger and commercial. The loss ratio appears to vary but has
a flat trend for the personal auto damage line of business. Also, if we look at the within-
year variation as measured by the vertical dotted line, there is considerable variance


FIGURE 2
Loss Ratio Over Time
[Color figure can be viewed at wileyonlinelibrary.com]

Source: NAIC Annual Statements.

among the states each year. The vertical dotted lines at each year represent the range
of the standard deviation of the loss ratio (in effect the mean loss ratio +/– 1 standard
deviation) among the states. If we think of the loss ratio as an imperfect measure of
profitability (lower loss ratio implies higher premiums per dollar of loss), we see that
while it goes up and down, the trend over the last two decades (solid blue line) is about
flat.

In contrast, in the auto liability lines of business, we see that there is some variability
over time, but that the time trend in the loss ratio is downward sloping. The loss ratio is
declining (implying higher profits per unit of loss). So, if one makes a simple conjecture,
the auto physical damage line of business is about the same level of profitability as
in the past, but the auto liability business is increasing in profitability. However, this
is somewhat simplistic as the liability line is a relatively long tail line and the eventual
profitability will not be known until sometime in the future. This implies that if driverless
cars do a better job of providing safer transportation, the insurers will lose revenues and
their associated costs from both lines, but the loss of profits from liability lines may
be greater if we make the strong assumptions of holding interest rates and inflation
constant.
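
To make the construction behind Figure 2 concrete, the following minimal sketch (Python with pandas; the file name and the columns state, year, direct_losses_incurred, and direct_premiums_earned are hypothetical placeholders for the NAIC annual statement fields) computes state-year loss ratios and the across-state mean and one-standard-deviation band for each year:

    import pandas as pd

    # Hypothetical NAIC extract: one row per state and year for a given line of business.
    naic = pd.read_csv("naic_personal_auto.csv")

    # Loss ratio = direct losses incurred / direct premiums earned.
    naic["loss_ratio"] = naic["direct_losses_incurred"] / naic["direct_premiums_earned"]

    # Across-state mean and +/- 1 standard deviation for each year,
    # mirroring the dotted bands described for Figure 2.
    band = naic.groupby("year")["loss_ratio"].agg(["mean", "std"])
    band["lower"] = band["mean"] - band["std"]
    band["upper"] = band["mean"] + band["std"]
    print(band)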

We now focus on losses in another dimension. Figure 3 shows the real per capita
losses for auto physical damage and auto liability over time.3 For both lines
(damage and liability), we see declining real per capita losses. This is consistent with a

3 To calculate the real per capita levels shown in the figures and used in the empirical anal-
ysis below, we employ the CPI ALL Items series from the Bureau of Labor Statistics where
1982–1984 = 100.


FIGURE 3
Real Per Capita Losses Incurred
[Color figure can be viewed at wileyonlinelibrary.com]

Source: NAIC and U.S. Census Bureau.

general decrease in deaths shown in Figure 1. We also see that in the most recent years,
both lines of business experienced increases in real losses, which is consistent with the
slight increases in fatalities, injuries, and property damage, likely due to distracted
driving.
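
As a sketch of the deflation described in footnote 3 (the file name and the columns losses, cpi, and population are hypothetical; cpi is the CPI All Items series with 1982-1984 = 100):

    import pandas as pd

    df = pd.read_csv("state_losses.csv")  # hypothetical state-year panel

    # Convert nominal direct losses to 1982-1984 dollars, then to per capita terms,
    # as in the series plotted in Figure 3.
    df["real_losses"] = df["losses"] * 100.0 / df["cpi"]
    df["real_losses_per_capita"] = df["real_losses"] / df["population"]
    print(df[["real_losses_per_capita"]].head())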

Figure 4 shows the real per capita hospital (Panel A) and health (Panel B) expendi-
tures by state. In both panels, we see an upward trend over the period from 1996
through 2013. Hospital and health expenditures are increasing for many reasons, per-
haps, in part, because of auto crashes. Reducing the likelihood of crashes could re-
duce the cost of health care, which would also affect the price of health insurance.
Figure 5 shows the trend in the real per capita cost of judicial administration (Panel A)
and police services (Panel B). Both show an upward sloping trend. These costs could
be rising for several reasons, including the cost of administering and policing auto
crashes.

Finally, we come to another possible effect to consider. This would be a disruption
of state insurance tax collections. Most states have premium taxes, which are a gross
receipts tax on the premiums written within the state. In 2016, the property–casualty
industry paid approximately $11 billion in premium taxes to the states (S&P Global
Market Intelligence, Combined Industry, Expense Exhibit, 2016). This represents about
1.25 percent of the average state’s revenue. Further, as we can see in Figure 6, the average
amount of premium taxes per registered vehicle ranges from about $11 to $16 over the
recent past. This series is likely also driven by general economic conditions, but we can
see that as the premium for auto insurance declines (as the risk of crashes is reduced),
there will be some reduction in state revenues.


FIGURE 4
Real Per Capita Health Care Costs
[Color figure can be viewed at wileyonlinelibrary.com]

Source: Area Resource File.

FIGURE 5
Real Per Capita Expenditures on Judicial Administration and Police
[Color figure can be viewed at wileyonlinelibrary.com]

Source: Bureau of Justice Statistics.


FIGURE 6
Ratio of Total Premium Taxes Collected Per Capita
[Color figure can be viewed at wileyonlinelibrary.com]

[Plot: Avg Insurance Taxes Per Capita (Std Error is Shaded), ranging roughly from $5 to $20 over 1995-2015.]

Source: NAIC and Watanabe (2017).

INITIAL EMPIRICAL ESTIMATES OF THE EFFECT OF NEW TECHNOLOGY
As the current driverless car technology is not sufficient to get a true indicator of the
changes in costs, we resort to a thought experiment. If we look at the effect of measures of
driving on various levels of costs or losses, we can see how sensitive these expenditures
are to changes in driving-related activities. We have three sources of evidence for driving
activity: the number of licensed drivers in a state in a given year, the number of vehicle
miles driven in a state in a given year, and the number of registered vehicles in the
state in a given year (data retrieved from NHTSA). These three variables are highly
correlated, so we use them independently in our analysis below. Because the driving
indices change over time and differ among states, we can assess how each of them affects
our variables of interest and use these differences to predict the effect of a change in
driving on expenditures or losses.

The typical model we employ to obtain our estimates uses the following reduced form:

log(yst) = α + β log(Ist) + ωZst + ηt + εs + μ.

This is a fixed-effect model where we have time fixed effects (t), state fixed effects (s),
and a random error for each observation. The variable log(y) is a dependent variable of
interest, say direct losses incurred in state (s) and year (t). Ist is an index for the variable
we believe might be related to y. We consider, as our index of driving, the log of the
number of drivers in a state, the log of the number of vehicle miles driven in a state, and
the log of the number of cars in the state. The coefficient β is the effect of the driving
index variable on our variable of interest. In this regression, the coefficient is interpreted
as an elasticity so that a given percentage change in the driving index can be interpreted
as a β percent change in y. For reference purposes, we refer to this style of model as
a log–log model. This estimated β is what we will report and discuss in the empirical
section below. In turn, Z is a vector of explanatory variables that varies over time and
among the states. We estimate numerous versions of these models using
state real per capita income, the percent of miles each year driven on rural highways,
and a variable describing the auto insurance regulatory stringency in the state.4 We also
estimate this model using weighted least squares (WLS) to minimize the effect of the
strong heteroskedasticity within the data. We use, as a weight, the average of the state’s
total vehicles miles driven over the period to mitigate the heteroskedastic effect and then
we use robust standard errors to account for any other unknown heteroskedasticity. We
only show the variables of interest in the tables below, in part because the indices of
driving variables are our primary interest.5
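
A minimal sketch of this style of estimation is given below (in Python with statsmodels, rather than the authors’ Stata code; the panel file and the column names losses, miles, income, pct_rural, reg_stringency, and avg_vmt are hypothetical). It approximates the two-way fixed effects with state and year dummies, weights by average state vehicle miles, and reports robust standard errors together with tests of β = 0 and β = 1:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("state_panel.csv")  # hypothetical state-year panel

    # Log-log specification: the coefficient on log_miles is read as an elasticity.
    panel["log_losses"] = np.log(panel["losses"])
    panel["log_miles"] = np.log(panel["miles"])
    panel["log_income"] = np.log(panel["income"])

    formula = ("log_losses ~ log_miles + log_income + pct_rural + reg_stringency "
               "+ C(state) + C(year)")

    # Weighted least squares (weight = average state miles driven) with robust errors.
    wls_fit = smf.wls(formula, data=panel, weights=panel["avg_vmt"]).fit(cov_type="HC1")

    print(wls_fit.params["log_miles"])        # elasticity point estimate
    print(wls_fit.t_test("log_miles = 0"))    # test against zero (the reported stars)
    print(wls_fit.t_test("log_miles = 1"))    # test of unit elasticity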

We also employ generalized linear models (GLMs) to obtain our elasticity estimates.
Certain models such as the Poisson or the negative binomial model account for the fact
that some of the data we employ are count data and are not distributed normally.
We use the negative binomial regression (Cameron and Trivedi, 2010) to estimate a
conditional two-way fixed effects model of the form:

E[yst] = f(exp(β log(Ist) + ωZst + ηt + εs + μ)).

In these negative binomial estimations, we calculate the elasticity by examining the
marginal effect of the regression estimates for the variable of interest.
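
A comparable count-data sketch is shown below, again with hypothetical column names. It uses the ordinary negative binomial estimator in statsmodels with state and year dummies as an approximation of the conditional two-way fixed-effects estimator, so it illustrates the idea rather than reproducing the authors’ procedure:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("state_panel.csv")  # hypothetical state-year panel
    panel["log_miles"] = np.log(panel["miles"])

    # Negative binomial regression for a count outcome such as total fatalities.
    nb_fit = smf.negativebinomial(
        "fatalities ~ log_miles + pct_rural + C(state) + C(year)",
        data=panel).fit(maxiter=500)

    # With an exponential conditional mean and a logged index, the coefficient on
    # log_miles can be read as an elasticity; average marginal effects give an
    # alternative dy/dx reading of the same estimate.
    print(nb_fit.params["log_miles"])
    print(nb_fit.get_margeff(at="overall").summary())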

In the tables below, there are two tests of significance mentioned. The first is the tradi-
tional test that the coefficient of interest is different from zero (β = 0). This will provide
us with information regarding the size of the elasticity estimate and its difference from
zero. However, if one thinks about some of these relationships, one could conjecture that
the change in the driving index should be proportional to the reduction in fatalities or
injuries. In this case, the test would be that the coefficient is different from one (β = 1).
Both test results are shown in the tables.6

4 We employ Harrington’s (2002) description of regulatory stringency updated to 2015. We use
this just as a control variable for the insurance related regressions to control for the regulatory
environment within the state. We also use the real per capita income from the Bureau of Economic
Affairs (https://apps.bea.gov/iTable/index_regional.cfm) as another state control variable with
the population estimates from Watanabe’s (2018) curated Census state population estimate series
(https://scholar.harvard.edu/awatanabe/data), and the percent of total miles driven in rural
areas of the states from the Federal Highway Administration’s Highway Statistics Series, table
VM-2 (https://www.fhwa.dot.gov/policyinformation/quickfinddata/qftravel.cfm). In most of
our regressions, we employ state fixed effects to control for other differences between the states.

5 Full regression results are available in Stata log files in pdf format for each table. These are
available as online materials.

6 For the test of whether the coefficient is different from one, only those that are different at the
10 percent level or better are indicated by a (†) symbol. This is merely to increase the readability


TABLE 2
Regression Elasticity Estimates for Types of Loss Events by Driving Index Variables (Ist)

Ist

Dependent Variable (y) Estimation Method Drivers Cars Miles

Panel A

Log total fatalities 2-way FE 0.299 ns 0.054 * 0.662 **

Log total fatalities WLS 2-way FE 0.342 ** 0.117 0.825 *** a

Total fatalities Conditional NB 2-way FE 0.068 ** 0.029 *** 0.004 ns

Panel B

Log total fatalities/pop 2-way FE 0.057 0.023 0.349

Panel C

Log total fatal crashes 2-way FE 0.496 ** 0.081 ** 0.941 *** a

Log total fatal crashes WLS 2-way FE 1.322 *** 0.098 * 1.233 *** a

Total fatal crashes Conditional NB 2-way FE 0.167 *** 0.002 ns 0.002 ns

Panel D

Log total alcohol-related fatalities 2-way FE 0.216 ns 0.079 * 0.916 *** a

Log total alcohol-related fatalities WLS 2 × FE 0.280 ns 0.104 1.419 *** a

Total alcohol fatalities Conditional NB 2-way FE 0.549 *** 0.000 ns 0.005 *

Panel E

Log total injuries 2-way FE –0.064 ns 0.051 ns 0.767 *** a

Log total injuries WLS 2 × FE 0.008 ns 0.038 ns 1.073 *** a
Total injuries Conditional NB 2-way FE –0.006 ns 0.003 ns 0.043 ***

Note: ns: no significance at reasonable levels. All standard errors are robust.
***Significant at .01 level.
**Significant at .05 level.
*Significant at .10 level.

aTest of hypothesis that β = 1, is not rejected.

Table 2 shows the results of our estimates that focus on fatalities and injuries. The main
question of interest is: does an index measure of driving activity relate to these fatalities

of the table. All estimate results including tests of the coefficient being different from zero are
provided in the online materials.


or injuries? We find that the number of drivers is related (in a statistical sense) to total
fatalities, total fatal crashes, and single and multiple car fatalities.

In Panel A of Table 2, we examine two types of models. The WLS log–log model shows the
estimated elasticity for drivers and total fatalities is 0.342. Thus, a 10 percent reduction in
drivers in the average state yields a 3.42 percent reduction in fatalities. The unweighted
log–log panel estimate is close to the WLS estimate but is just outside standard levels
of statistical significance. Agreement in size suggests some robustness in the estimate.
In contrast, the negative binomial model provides an elasticity estimate slightly lower
than the WLS or log–log model. All three methods provide a relatively consistent point
estimate even though not all are significant. Looking at other indicators of driving, we
see that the number of cars in a state is not associated with fatalities, but that miles
driven are statistically related to fatalities. One might suggest that the miles driven
index is likely to be the most relevant for our analysis as it is the index most related
to driving intensity. The number of licensed cars and drivers is potentially related to
fatalities, but miles driven may be more directly related. We find that the elasticity of
miles driven is generally higher for all models estimated in Table 2, suggesting that
injuries and fatalities are more sensitive to changes in miles driven. For miles driven,
the log–log models (both weighted and unweighted) are higher, suggesting a 10 percent
decrease in miles driven will result in a reduction in fatalities in the range of 6.62–8.25
percent using the point estimates of the elasticities.7 Using our thought experiment,
suppose there is a 10 percent reduction in miles driven by human-driven vehicles (with
those miles replaced by corresponding “safe” miles driven by self-driving vehicles). This
would imply a reduction in fatalities of between 2,478 and 3,090 per year.
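
The arithmetic behind this thought experiment is simply the baseline count multiplied by the elasticity and the assumed percentage change in miles; the short sketch below reproduces the quoted range from the two log–log point estimates (the 2016 fatality count of 37,461 is the figure used later in Table 8):

    fatalities = 37_461            # total U.S. traffic fatalities, 2016
    reduction_in_miles = 0.10      # assumed share of human-driven miles replaced

    for elasticity in (0.662, 0.825):   # unweighted and WLS log-log point estimates
        avoided = fatalities * elasticity * reduction_in_miles
        print(f"elasticity {elasticity}: about {avoided:,.0f} fewer fatalities per year")
    # Prints roughly 2,480 and 3,091, matching the 2,478-3,090 range quoted above.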

Panel B of Table 2 provides evidence on a per capita basis using a log–log model.
We see that these estimates are lower but of the same order of magnitude as those in
Panel A but none are significant. Panel C shows the analysis using fatal crashes as a
dependent variable. For miles driven, we see something like a unit elastic result—a 10
percent reduction in miles driven is related to a reduction in crashes by 9.41 percent
to 12.33 percent. One cannot reject the hypothesis that these estimates are equal to one.

Panel D of Table 2 shows the models for fatal alcohol crashes. Again, we see a near unit
elasticity measure for alcohol crashes using fixed effects and WLS log–log models. A 10
percent reduction in miles driven is related to a 9.1–14.2 percent reduction in alcohol-
related fatalities, which translates to a range of deaths avoided of 1,075–1,675. Finally,
in Panel E, we also see a near unit elastic reduction in injuries from reducing miles driven.
A 10 percent reduction in human-driven miles yields a reduction in injury-causing accidents
of 6,940–9,100 per year.

Table 3 shows a similar table describing the relationship between the various indices
of driving activity and some state expenditures on safety and health. We see that our
index variables for the number of miles driven are positively associated with judicial
expenditures, police expenditures, and health expenditures. Of the three expenditures,

7 One final note is that the WLS log–log elasticity estimate is not significantly different from
one. For the weighted regression, we cannot reject the hypothesis that the coefficient is equal to
one at standard levels of significance (p = 0.186).


TABLE 3
Regression Elasticity Estimates of Expenditures Related to Indices of Driving

Ist

Dependent Variable Log(y) Estimation Method Drivers Cars Miles

Health expenditures 2 × FE 0.738 nsa –0.111 ns 0.501 ns
WLS 2 × FE 0.978 ** a –0.027 ns 0.518 **

Judicial expenditures 2 × FE 2.760 *** 0.132 ns 0.267 ***
WLS 2 × FE 0.401 ns 0.075 ns 0.455 *

Police expenditures 2 × FE 0.493 nsa 0.089 ns 1.143 ** a
WLS 2 × FE 0.400 *** 0.104 ** 0.725 ***

Note: ns: no significance at reasonable levels. All standard errors are robust.
***Significant at .01 level.
**Significant at .05 level.
*Significant at .10 level.
aTest of hypothesis that β = 1, is not rejected.

TABLE 4
Regression Elasticity Estimates of Various Public Safety Measures

Ist

Dependent Variable Log(y) Method Drivers Cars Miles

EMT employment 2 × FE –0.077 ns –0.118 ns –0.005 ns
WLS 2 × FE 0.100 ns –0.283 *** 0.347 ns

HW patrol expenditures 2 × FE –0.917 ns –0.055 ns 0.021 ns
WLS 2 × FE –0.917 ** –0.080 ns 0.027 ns

Fire fighter employment 2 × FE 0.315 ns 0.077 ns –0.477 ns
WLS 2 × FE 0.691 ns 0.088 ns 0.634 ns

Note: ns: no significance at reasonable levels. All standard errors are robust.
***Significant at .01 level.
**Significant at .05 level.
*Significant at .10 level.

expenditures on police seem most sensitive to changes in miles driven with a 10 percent
reduction in miles driven being associated with a 7.25–11.43 percent reduction in po-
lice expenditures. Health and judicial expenditures are statistically associated with a
reduction in miles driven but are less sensitive.

Other safety-related expenditures, however, are not related to our driving indices in the hy-
pothesized way. We examine state-level aggregated employment of emergency medical
technicians (EMT), firefighters, and the real state expenditures on the highway patrol.


TABLE 5
Elasticity Estimates for Auto Insurance Direct Losses

Ist
Dependent Variable Log(y) Method Drivers Cars Miles

Real personal auto damages 2 × FE 0.706 *** a 0.070 ns 0.805 ***
WLS 2 × FE 0.851 *** a 0.135 *** 0.820 *** a

Real auto personal liability 2 × FE 0.254 ns 0.037 ns 0.705 ***
WLS 2 × FE 0.293 *** 0.734 *** 0.043 ns

Real auto commercial damages 2 × FE 1.294 *** 0.052 ns 1.377 *** a
WLS 2 × FE 1.680 *** 0.240 *** 1.456 ***

Real auto commercial liability 2 × FE 0.750 *** 0.019 ns 1.300 *** a
WLS 2 × FE 0.856 *** a 0.027 ns 1.348 ***

Note: ns: no significance at reasonable levels. All standard errors are robust.
***Significant at .01 level.
**Significant at .05 level.
*Significant at .10 level.
aTest of hypothesis that β = 1, is not rejected.

As one would expect, these expenditures may relate to many variables other than our
indices of driving activity. We were not able to discern relationships in the hypothesized
direction. This is likely because dealing with automobile crashes, while an important
part of these public safety officials’ jobs, is not important enough to have a statisti-
cal effect on employment or expenditures. Also, we have the smallest amount of data
(4 years) for these series, and there may not be enough variation within the period to be
able to identify relationships.

Table 5 focuses directly on the insurance industry. We examine the effect on real di-
rect losses incurred for personal auto and commercial auto damage and corresponding
liability coverage. Unlike the tables above, the direct losses incurred are more likely to
be directly related to driving activity whether it is the number of drivers, the number
of cars, or the miles driven in a state. Again, if we focus on the measure of intensity-
miles driven, we see near unit elastic relationships. A 10 percent reduction in miles
driven is associated with a 0.80–0.82 percent reduction in real personal auto dam-
ages. The most elastic seems to be real auto commercial damages—a 10 percent re-
duction in miles driven would be associated with a reduction in direct loses between
13.77 and 14.56 percent reduction incurred losses. All the lines of business are rela-
tively sensitive to reductions in the number of drivers compared to miles driven or
the number of cars registered. It is also important to note that the numbers of drivers
are also more likely to be significantly related to the losses incurred for all lines of
business.

Table 6 looks at the insurance premium tax, the gas tax, and tax or fee revenues from
motor vehicle licenses. These results suggest that a 10 percent reduction in miles driven
will yield a 6.33 percent reduction in the premium tax collected for auto liability for


TABLE 6
Elasticity Estimates of Effect on State Revenues

Ist
Dependent Variable Log(y) Method Drivers Cars Miles

Taxes from auto liability personal 2 × FE 0.314 ns 0.067 ns 0.633 ** a
WLS 2 × FE 0.413 *** 0.602 ns 0.636 ***

Taxes from auto damages personal 2 × FE 0.828 *** a 0.166 * 0.921 *** a
WLS 2 × FE 0.823 *** a 0.150 *** 0.812 ***

Taxes from auto liability commercial 2 × FE 0.515 ns –0.021 ns 0.867 *** a
WLS 2 × FE 0.119 ns –0.002 ns 0.370 ***

Taxes from auto damages commercial 2 × FE 0.824 ** a 0.066 1.277 *** a
WLS 2 × FE 0.863 *** a 0.286 *** 0.872 *** a

Tax revenue from MV licensesb 2 × FE 2.466 *** 0.117 ns 0.327 ***
WLS 2 × FE 2.267 *** 0.127 ns 0.286 ***

Tax revenue from fuelb 2 × FE 0.912 *** 0.009 ns 0.162 ***
WLS 2 × FE 0.967 *** a 0.032 ns 0.156 ***

Note: ns: no significance at reasonable levels. All standard errors are robust.
***Significant at .01 level.
**Significant at .05 level.
*Significant at .10 level.
aTest of hypothesis that β = 1, is not rejected.
bEstimated with year effects only.

personal passenger lines. Consistent with our results in direct losses incurred, commer-
cial policies are slightly more sensitive than the personal auto policies regarding drivers.
For small changes in the number of drivers with driverless car technology, there will
likely be real changes to auto losses as well as premium taxes collected.

The last two taxes shown in Table 6 are revenues from car tags and revenues from the
gas tax. These models were estimated slightly differently from the others as they were
estimated without state fixed effects. There are about 40 states with these taxes, and we
had data for 4 years, so we estimated without the state fixed effects, as there was not much
variation over time. Tag revenue is very elastic (over 2) for drivers; thus, a 10 percent
reduction in licensed drivers is associated with a 24.66 percent reduction in tag revenues.
It is interesting that we do not see this same relationship with registered cars. We do see
a relationship in the 3 percent range for miles driven. For fuel, we see near unit relationships
for drivers and an inelastic relationship with miles driven, where a 10 percent reduction in
miles driven is associated with a 1.56–1.62 percent reduction in tax revenues from gas
taxes.

In addition to state revenues, insured losses, and public expenditures other industries
may be affected by a reduction in crashes. We obtained data from the Bureau of Labor


TABLE 7
Elasticities From Related Industries

Ist
Dependent Variable Log(y) Method Drivers Cars Miles

Employment in law offices 2 × FE 0.191 ns 0.064 *** 0.512 ***
WLS 2 × FE 0.046 ns 0.033 *** 0.427 ***

Employment in auto repairs 2 × FE 0.195 ns 0.024 ns 0.320 ***
WLS 2 × FE 0.194 *** 0.026 ns 0.320

Employment in auto parts 2 × FE 0.102 ns 0.012 ns 0.264 **
WLS 2 × FE 0.102 ** 0.013 ns 0.265 ***

Employment in places serving alcohol 2 × FE 0.023 ns –0.014 ns 0.458 *
WLS 2 × FE 0.039 ns –0.079 0.395 **

Note: ns: no significance at reasonable levels. All standard errors are robust.
***Significant at .01 level.
**Significant at .05 level.
*Significant at .10 level.

Statistics Quarterly Census of Employment and Wages program. These data are the
underlying data for the unemployment compensation system. The program collects, for each
location, the wages and employment of workers at the firm, and these are aggregated to the
NAICS level by state. We obtained employment for the NAICS codes for law offices
(5411), auto body shops (8111), auto parts stores (4231), and places serving alcohol
(7224). We find that law office employment is sensitive to all three driving indices, auto
body repair is sensitive to the number of drivers and the number of miles, and auto parts
employment is not significantly associated with any driving index. These employment
figures, for the case of law offices, include lawyers and their staff. A 10 percent reduction
in miles driven yields a 5.12 percent reduction in law office employment. That translates
to approximately 1,100 fewer employees in law offices for the average state. For auto
repairs, we see relatively inelastic results for drivers and miles. The same is true for auto
parts. Finally, for employment in places that serve alcohol, the thought experiment is a
bit different. If consumers could go to a bar or restaurant without worrying about driv-
ing home with the risk of a DUI, they might be encouraged to do so. Thus, a 10 percent
increase in miles driven is related to a 3.95–4.58 percent increase in employment in bars
and restaurants serving alcohol. This is still inelastic, but significantly different from
one too.

Finally, one can do a relatively simple welfare analysis from the data that are summa-
rized in Table 8. We assume that self-driving vehicles of Level 4 and 5 technologies are
introduced into the market, and a suitable proportion (say 10 percent) of miles driven are
completed in autonomous vehicles. Thus, if 10 percent fewer miles are driven, based on
our regression estimates from Table 2, total fatalities in the United States will decrease
by 6.62 percent. Using 2016 data on fatalities, this translates to 2,480 lives saved in a
given year. The Department of Transportation uses a value of statistical life estimate of


TABLE 8
Welfare Analysis of a Reduction of Miles Driven by 10 Percent
Cost Breakdown Related to Motor Vehicle Crashes in 2016
(Assuming an Elasticity of 0.662 Between Fatalities and Miles Driven)

Total Fatalities, 2016: 37,461
Saved Lives Assuming a 10 Percent Reduction in Drivers: 2,480

Breakdown of Costs                                         Total Costs 2016      Estimated Total Savings 2016
Value of a statistical life from USDOT ($9,600,000)        $359,625,600,000      $23,807,214,720
Total costs from NHTSA (100%)                              $359,625,600,000      $23,807,214,720
Medical costs due to injuries (10%)                         $35,962,560,000       $2,380,721,472
Congestion costs (12%)                                      $43,155,072,000       $2,856,865,766
Property damages (31%)                                     $111,483,936,000       $7,380,236,563
Property-damage-only (30%)                                 $107,887,680,000       $7,142,164,416
Crashes not reported to police (17%)                        $61,136,352,000       $4,047,226,502

Sources of Payment for Motor Vehicle Crash Costs in 2016
Motor vehicle crash costs paid by                          Total Costs           Total Savings
Federal revenues (4%)                                       $14,385,024,000         $952,288,589
State and localities (3%)                                   $10,788,768,000         $714,216,442
Programs subsidized by public revenues
  (Medicare/Medicaid) (1%)                                   $3,596,256,000         $238,072,147
Private insurers (54%)                                     $194,197,824,000      $12,855,895,949
Individual crash victims (23%)                              $82,713,888,000       $5,475,659,386
Third parties (i.e., motorists delayed, charities,
  healthcare providers) (16%)                               $57,540,096,000       $3,809,154,355

Source: NHTSA, The Economic and Societal Impact of Motor Vehicle Crashes, May 2015; Department of Transportation, 2016.

$9.6 million (2016) for regulatory and policy decisions, and this yields a welfare savings
of nearly $23.8 billion per year from lives saved alone. We do not have good data (e.g.,
a sufficiently long series) on crashes resulting in nonfatal injuries, but if these injuries
were accounted for, the welfare savings from reduced personal injuries and property damages
would be even higher. Our assumption of the relationship between fatalities and miles driven
understates the effect on all crashes as we do not include the costs of nonfatal crashes,
so our estimate is conservative.


We can also look at the incidence of accident costs. NHTSA undertook a very detailed eco-
nomic analysis of the cost of traffic crashes and was able to apportion the costs to various
actors who bear the burden of the losses. Table 8 also shows the distribution of the costs
of crashes among the various actors who bear the costs of crashes. Insurers (auto, health,
life) are the direct payers for most of the costs. However, that is exactly the role insurers
serve in the market. The industry’s total saving from a 10 percent reduction in miles
driven is $12.85 billion per year in the reduction of losses and loss adjustment expenses.
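
As a worked check on Table 8, the sketch below multiplies the implied lives saved by the USDOT value of a statistical life and then applies the NHTSA payer shares reported in the table; all inputs are figures quoted in the article:

    fatalities = 37_461
    elasticity = 0.662
    vsl = 9_600_000                      # USDOT value of a statistical life (2016)

    lives_saved = fatalities * elasticity * 0.10        # about 2,480
    welfare_savings = lives_saved * vsl                 # about $23.8 billion

    # NHTSA apportionment of crash costs among payers (shares from Table 8).
    payer_shares = {
        "private insurers": 0.54,
        "individual crash victims": 0.23,
        "third parties": 0.16,
        "federal revenues": 0.04,
        "state and localities": 0.03,
        "public programs (Medicare/Medicaid)": 0.01,
    }
    for payer, share in payer_shares.items():
        print(f"{payer}: ${welfare_savings * share:,.0f}")
    # The private-insurer share comes to roughly $12.86 billion per year,
    # in line with the $12.85 billion figure cited above.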

DISCUSSION AND CONCLUSIONS
In this article, we obtain rough estimates of the relationships between expenditures and
losses that are related to automobile crashes. This allowed us to estimate a conservative
welfare savings of $23.8 billion per year assuming a 10 percent reduction in miles driven
on the road. Some of the reduced-form estimates for the relationships between fatalities
and injuries, insurance losses, and tax revenues are likely helpful for policymakers to
understand the initial ramifications of changes in how a state’s population drives and
the kind of autonomous technology that is necessary to reduce the cost of risk of crashes.
This technology will affect many industries, but the insurance and health industries are
likely to bear the largest brunt.

As this is a first pass at this exercise, there are likely many improvements to be made to the
estimation approach and the construction of variables of interest. First, we make an implicit
assumption that all safety technologies are perfectly safe. We know that is not
true. The way we approach our thought experiment is that a 10 percent reduction
in miles driven is associated with an x percent reduction in one of our variables of
interest. We could add in some friction to adjust for the imperfect safety costs.
However, we have no idea what those costs really are. In this article, we are making an
order of magnitude assessment of how losses or expenditures react to changes in miles
driven.

Second, it would be beneficial to separate commercial drivers from total drivers and com-
mercial vehicles from private vehicles, and to examine other more micro variables that may
be related to emergency services, such as ER visits and ER staffing. A third refinement would
be to break down healthcare expenditures into accident-related costs, insurance-covered
costs, and taxpayer-covered costs.

Fourth, there is the possibility of disruption in other industries. For example, will we
need high opportunity cost commercial real estate in central cities devoted to parking?
What will be the effect on commercial trucking? Drivers will no longer need time to rest,
and trucks will operate 24 × 7. Also, this new technology may affect employment in
industries with high levels of defined benefit plan penetration. For example, Laughlin
(2017) reports that the Philadelphia, Pennsylvania area transportation utility (SEPTA) is
concerned about the substitution of Uber or Lyft rides for bus rides. Uber wait times have
declined dramatically, and this has caused bus riders to shift to the ride-sharing services.
This has implications for publicly financed transportation networks and their pension
plans. O’Toole (2017) reports that the unfunded liabilities for just the health component
of SEPTA employees’ retirement package are nearly as large as the organization’s current
operating budget. SEPTA is not the only large transportation authority with these issues,
and they are compounded by the backlog of repair and maintenance necessary to bring
these systems to an acceptable operating level. O’Toole asserts that three-fourths of the

DRIVERLESS TECHNOLOGIES AND THEIR EFFECTS ON INSURERS AND THE STATE 431

loss of riders in 2016 was due to people switching to ride-share services. These public
transport services are also highly subsidized, and the ride-sharing services seem to com-
pete. Autonomous vehicles and ride-sharing could replace portions of these networks,
stranding investment and putting pensions at risk. Thus, while property–casualty in-
surers are likely at the greatest risk of transformation due to driverless technology, other
industries and governments are also likely to see changes in operations and services that
they provide.

REFERENCES
Bits and Atoms, 2017, Taming the Autonomous Vehicle: A Primer for Cities, Bloomberg Philanthropies and Aspen Institute Center. Retrieved from https://www.bbhub.io/dotorg/sites/2/2017/05/TamingtheAutonomousVehicleSpreadsPDFreleaseMay3rev2 (accessed August 24, 2018).

Blincoe, L. J., T. R. Miller, E. Zaloshnja, and B. A. Lawrence, 2015, May, The Economic and
Societal Impact of Motor Vehicle Crashes, 2010 (Revised) (Report No. DOT HS 812 013)
(Washington, DC: National Highway Traffic Safety Administration).

Burns, L. D., 2013, Sustainable Mobility: A Vision of our Transport Future, Nature,
497(7448): 181-182.

Cameron, A. C., and Trivedi P. K., 2010, Microeconometrics Using Stata (College Station,
TX: Stata Press).

Frisoni, R., A. Dall’Oglio, C. Nelson, J. Long, C. Vollath, D. Ranghetti, and
S. McMinimy, 2016, Research for TRAN Committee—Self-Piloted Cars: The Fu-
ture of Road Transport? (PE 573.434). Retrieved from http://www.europarl.
europa.eu/RegData/etudes/STUD/2016/573434/IPOL_STU(2016)573434_EN
(accessed August 24, 2018).

Harper, C. D., C. T. Hendrickson, and C. Samaras, 2016, Cost and Benefit Estimates of
Partially-Automated Vehicle Collision Avoidance Technologies, Accident Analysis &
Prevention, 95: 104-115.

Harrington, S., 2002, Effects of Prior Approval Rate Regulation of Auto Insurance, in:
J. D. Cummins, ed., Deregulating Property-Liability Insurance: Restoring Competition and
Increasing Market Efficiency (Washington, DC: Brookings Institute).

Jaynes, N., 2016, Here’s the Timeline for Driverless Cars and the Tech That Will Drive
Them, Mashable. Retrieved from http://mashable.com/2016/08/26/autonomous-
car-timeline-and-tech/#Zer9hyn8pEqB (accessed August 24, 2018).

Kessler, S., 2017, A Timeline of When Self-driving Cars Will be on the Road, According
to the People Making Them, Quartz. Retrieved from https://qz.com/943899/a-
timeline-of-when-self-driving-cars-will-be-on-the-road-according-to-the-people-
making-them/ (accessed August 24, 2018).

Khalid, A. E., 2017, Why Singapore Is a Key Part of NuTonomy’s Strategy
for Driverless Cars, WBUR. Retrieved from http://www.wbur.org/bostonomix/
2017/10/25/delphi-purchase-nutonomy (accessed August 24, 2018).

Kubota, Y., 2015, Toyota Aims to Make Self-Driving Cars by 2020, Wall Street Journal. Re-
trieved from https://www.wsj.com/articles/toyota-aims-to-make-self-driving-cars-
by-2020-1444136396 (accessed August 24, 2018).

Laughlin, J., 2017, As Uber Grows, SEPTA to Rethink Bus Service, Philadel-
phia Inquirer, July 23. Retrieved from http://www.philly.com/philly/business/
transportation/as-uber-grows-septa-to-rethink-bus-service-20170721.html (accessed
August 24, 2018).

Litman, T., 2014, Autonomous Vehicle Implementation Predictions, White Paper, Victoria
Transport Policy Institute.

McFarland, M., 2016, BMW Promises Fully Driverless Cars by 2021, CNN. Re-
trieved from http://money.cnn.com/2016/07/01/technology/bmw-intel-mobileye/
(accessed August 24, 2018).

McKinsey & Company and Bloomberg, 2016, An Integrated Perspective on the
Future of Mobility. Retrieved from https://data.bloomberglp.com/bnef/sites/
14/2016/10/BNEF_McKinsey_The-Future-of-Mobility_11-10-16 (accessed
August 24, 2018).

National Conference of State Legislatures, 2017, Autonomous Vehicles Self-Driving
Vehicles Enacted Legislation. Retrieved from http://www.ncsl.org/research/
transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx
(accessed August 24, 2018).

National Highway Traffic Safety Administration, 2015, The Economic and Soci-
etal Impact of Motor Vehicle Crashes, May. Retrieved from https://crashstats.
nhtsa.dot.gov/Api/Public/ViewPublication/812013 (accessed August 24, 2018).

National Highway Traffic Safety Administration, 2009, The Long Term Effect of ABS
in Passenger Cars and LTVs, DOT-HS 811 182. Retrieved from https://crashstats.
nhtsa.dot.gov/Api/Public/ViewPublication/811182 (accessed August 24, 2018).

Ohnsman, A., 2017, Our Driverless Future Begins as Waymo Transitions to
Robot-Only Chauffeurs, Forbes. Retrieved from https://www.forbes.com/sites/
alanohnsman/2017/11/07/our-driverless-future-begins-waymo-transitions-to-
robot-chauffeurs/#2a3c45a9e7e8 (accessed August 24, 2018).

Payne, C. E., 2017, Driverless Cars—The Race to Level 5 Autonomous Vehicles. Re-
trieved from https://www.engineering.com/DesignerEdge/DesignerEdgeArticles/
ArticleID/15478/Driverless-Cars-The-Race-to-Level-5-Autonomous-Vehicles.aspx
(accessed August 24, 2018).

Poczter, S. L., and L. M. Jankovic, 2014, The Google Car: Driving Toward a Better Future?
Journal of Business Case Studies, 10(1): 7–14.

Ron, L., 2017, The Future of Transportation, MIT Technology Review. Retrieved from
https://events.technologyreview.com/video/watch/lior-ron-otto-future-of-
transportation/

Ross, P. E., 2017, CES 2017: Nvidia and Audi Say They’ll Field a Level 4 Autonomous
Car in Three Years, IEEE Spectrum. Retrieved from https://spectrum.ieee.org/cars-
that-think/transportation/self-driving/nvidia-ceo-announces (accessed August 24,
2018).

Society of Automotive Engineers, 2016, Taxonomy and Definitions for Terms Related
to On-Road Motor Vehicle Automated Driving Systems J3016 201401. Retrieved from
SAE International website: https://www.sae.org/standards/content/j3016_201401/
(accessed August 24, 2018).

Scism, L., and N. Freidman, 2017, Smartphone Addicts Behind the Wheel Drive Car Insurance Rates Higher, Wall Street Journal, February 21. Retrieved from https://www.wsj.com/articles/smartphone-addicts-behind-the-wheel-drive-car-insurance-rates-higher-1487592007?mg=prod/accounts-wsj (accessed August 24, 2018).

U.S. Department of Transportation, 2016, Revised Departmental Guidance 2016:
Treatment of the Value Preventing Fatalities and Injuries in Preparing Economic
Analysis. Retrieved from USDOT website: https://www.transportation.gov/
sites/dot.gov/files/docs/2016%20Revised%20Value%20of%20a%20Statistical%20
Life%20Guidance (accessed August 24, 2018).

Valdes-Dapena, P., 2017, GM: Self-Driving Cars Are Our Next Big Thing, CNN.
Retrieved from http://money.cnn.com/2017/11/30/technology/gm-autonomous-
cars-2019/index.html (accessed August 24, 2018).

Walker, J., 2017, The Self-Driving Car Timeline—Predictions from the Top 11
Global Automakers. Retrieved from https://www.techemergence.com/self-driving-
car-timeline-themselves-top-11-automakers/ (accessed August 24, 2018).

Watanabe, A., 2018, US Census Bureau State Level Population Estimates, 1990–2016,
Version 1.0. Retrieved from http://scholar.harvard.edu/files/awatanabe/files/us_
state_population_estimate_technical_notev1 (accessed August 20, 2018).

Yu, J. M., M. Kim, and M. Anantharaman, 2017, Chipmaker Nvidia’s CEO Sees
Fully Autonomous Cars Within 4 Years, Reuters. Retrieved from https://
www.reuters.com/article/us-nvidia-ai-chips/chipmaker-nvidias-ceo-sees-fully-
autonomous-cars-within-4-years-idUSKBN1CV192?feedType=RSS&feedName=
technologyNews (accessed August 24, 2018).

Ziegler, C., 2016, Kia Launches Drive Wise Brand to Build Self-Driving Cars by 2030, The Verge. Retrieved from http://money.cnn.com/2016/07/01/technology/bmw-intel-mobileye/ (accessed August 24, 2018).

Zimmer, J., 2016, The Third Transportation Revolution [Web log post], Medium.com.
Retrieved from https://medium.com/@johnzimmer/the-third-transportation-
revolution-27860f05fa91 (accessed August 24, 2018).


© Risk Management and Insurance Review, 2011, Vol. 14, No. 2, 299-309
DOI: 10.1111/j.1540-6296.2011.01200.x

EDUCATIONAL INSIGHTS

USING TECHNOLOGY TO ENCOURAGE CRITICAL THINKING
AND OPTIMAL DECISION MAKING IN RISK
MANAGEMENT EDUCATION
John Garvey
Patrick Buckley

ABSTRACT
This article draws a link between the risk management failures in the financial
services industry and the educational philosophy and teaching constraints at
business schools. An innovative application of prediction market technology
within business education is proposed as a method that can be used to encourage
students to think about risk in an open and flexible way. This article explains
how prediction markets also provide students with the necessary experience to
critically evaluate and stress-test quantitative risk modeling techniques later in
their academic and professional careers.

INTRODUCTION
The financial and economic crisis that we continue to endure presents a serious challenge
to the teaching and learning strategies employed in universities. Business graduates are
expected to have a deep knowledge of the theory that forms the bedrock of the financial
system as well as the mathematical competence necessary to apply asset pricing and risk
management methodologies. However, the techniques and models used to control and
manage risk are often taught in an environment that does not provide sufficient space
and time for rigorous debate and critical analysis.

Students are often presented with subject knowledge in a form in which the content has already been carefully selected and sequenced by their lecturer. The education literature
already notes that this method of providing teaching materials prevents an active learn-
ing dynamic (Kinchin, Chadha, and Kokotailo, 2008). In the early stages of university
business programs, the often large class sizes limit the opportunity for students to engage
in realistic decision-making scenarios. The project described in this article is founded on
providing students with an early testing ground for the application of risk management
theory. The creation of a closed market populated by other class members is a departure
from the traditional approach where students learn about the use of statistical mea-
sures of risk such as standard deviation and correlation and become familiar with their

John Garvey is a Lecturer in Risk Management and Insurance, Kemmy Business School, Uni-
versity of Limerick; e-mail: john.garvey@ul.ie. Patrick Buckley is at the Kemmy Business School,
University of Limerick; e-mail: Patrick.buckley@ul.ie.

practical relevance to industry standards such as beta or value-at-risk through lectures
and formulaic practice. The application by students of statistical methods in a real-time
insurance market demonstrates the relevance of human behavior and expectations in
driving market dynamics.

Beyond the confines of the university campus we can observe increasing pressure on the insurance system to underwrite risks previously considered uninsurable. The insurance system is absorbing potential claims associated with catastrophic risks posed by natural hazards such as earthquakes and windstorms and, in some cases, man-made hazards associated with technologies such as nuclear, biological, and chemical engineering. This trend is occurring at a time when the industry is beset by narrowing profit margins as large volumes of capital compete for a limited range of risks. There is now a large category of insured risks that are being priced and underwritten using techniques that do not apply the age-old mathematical comforts of the law of large numbers and the central limit theorem.

This article describes an innovative teaching mechanism that has been applied to a
large group of undergraduate students at the Kemmy Business School, University of
Limerick. We document how the teaching and learning environment has been dramati-
cally changed through the introduction of a prediction market where students estimate
and transfer insurance risks. The market structure encourages students to think about
risk outside the confines of the lecture theatre. The competitive nature of the mar-
ket and the sparse historical information that is made available require students to
explore the strengths and limitations of traditional risk management techniques. Impor-
tantly, the students’ participation in this dynamic and complex environment coincides
with their introduction to formal ways of thinking about risk management. Because
of this, the market activity provides a reference point during lectures so that students
engage in dialogue and listen in an open and flexible way. The dynamic nature of the
market and its direct and timely link with the course content encourages students to
learn at a “deep” level. It provides them with skills that they can bring to bear in the
learning process outside of the specifics of this module.

In this article, we document the prediction market structure as it is used in an under-
graduate risk management module taken by 430 undergraduate students. The module
is an introduction to a specialty stream in risk management and insurance. Graduates
in this specialty go on to work in roles as varied as risk analysis, insurance and rein-
surance underwriting, and fund management. These roles primarily require an ability
to accurately identify and assess risks using historical data in a variety of quantitative
risk models. In practice, risk decision making is also influenced by the existing risk
profile of the organization, the requirements of regulators, as well as pressures relating
to performance. The many technical skills required in risk decision making must often
be applied with subjective elements of judgment. The prediction market allows students
to observe the reflexive nature of their decisions in a dynamic environment.

The article is structured as follows. The Introduction section sets out the motivation for the current study. "Risk, Insurability, and Education" provides a context for the use of prediction markets in risk management education by focusing on the challenges faced by the insurance industry and the changing nature of insurability. "The Insurance Loss Market" discusses the importance of class interaction and critical thinking in the context of education and risk management; this section also describes the design of the Insurance Loss Market. "Results on Risk Decision Making and Learning" describes the results of the research and examines the effectiveness of prediction markets in engaging students and augmenting learning outcomes. "Conclusions" discusses the future of risk management education and the development of innovative techniques that inform risk decision making.

RISK, INSURABILITY, AND EDUCATION
Within the education environment, and business schools in particular, the constraints of time and the demands from employers for practical and technical knowledge leave little space for the exploration of how decisions are made in the absence of known ex ante probability distributions. Third-level education in risk management focuses on how practitioners make decisions when faced with ex ante probability distributions that are known. Graduates who specialize in risk management and finance learn a great deal about the quantitative and technical aspects of risk decision making. Popular quantitative models, such as value-at-risk, are generally incorporated into taught modules at both undergraduate and postgraduate levels. In this teaching environment, Frank Knight's important distinction between risk and uncertainty is rarely linked directly to industry practice and is likely to be relegated to an historical artifact (Knight, 1921).

The assumption that we can accurately estimate ex ante probability distributions is the foundation for many of the risk models used by the insurance and banking industries and interpreted by regulators. For academics, both as researchers and as teachers, there is a recognition that effective business education should provide students with the opportunity to actively apply and evaluate decision making in an environment that closely approximates real-world decisions. In this article, we show that this can be achieved by providing students with this opportunity early in an undergraduate business program, before their perspective on risk is influenced by traditional thinking and contemporary risk models. As we can observe from the ongoing financial crisis, an increasing proportion of the risks that are priced and managed within the financial system extend beyond the limiting parameters required for models such as value-at-risk. If they are to become effective risk management professionals, it is important that graduates become aware of the Knightian uncertainty of the real world, rather than imposing a strict mathematical framework on their decisions. The management of uncertainty can be achieved much more effectively through conservatism, avoidance, and simple diversification methods where possible.

There is a growing awareness that traditional teaching methods in risk management
and finance are somewhat narrow. This awareness has grown most acutely over the past
2 years as we have seen the near collapse of the banking system and the failure of a
number of institutions. However, the failure in risk management is the most recent and
devastating in a lineage that can be traced back through Enron, LTCM, and Barings Bank.
These risk management failures have prompted a variety of responses from corporations
and regulators. Within education, business graduates now have a greater awareness of
the limitations of quantitative risk models and there has been a general trend toward
including new subject areas such as governance and ethics for those engaged in finance
and risk management. Although this trend is laudable in some respects, a criticism of
this approach might be that graduates compartmentalize the different subject areas, and
are unlikely to later draw on issues relating to governance and ethics when they are
engaging in risk management.

Within business education a number of techniques have been developed that allow stu-
dents the opportunity to apply their knowledge of relevant theory in a realistic setting.
In risk management education the breadth of case study and market applications is
proof of the need to sharpen traditional teaching techniques so that university students
fully appreciate the challenges of risk management. Projects using computer simula-
tion have been described by Hoyt, Powell, and Sommer (2007), Born and Martin (2006), and Joaquin (2007). Hoyt et al. introduce commercially available software produced by RiskMetrics to examine value-at-risk. Similarly, Born and Martin adopt software provided by AIR Worldwide to allow students to engage in catastrophe modeling. Joaquin describes the application of spreadsheet-based simulation in loss modeling. While these approaches are effective in allowing students to
practice and refine their skills, they are essentially static in nature and as with many risk
models there are significant model assumptions made a priori. The project described in
this article is also very different from the insurance market simulation used by Russell
(2000). Rather than simulate an insurance market, we use actual, real-time insurance
data and prediction market software. The activity of market participants (in this case,
the students) creates the pricing dynamic by evaluating likely insurance losses.

Other approaches in creating an insurance market type environment generally take
the form of a case-study-type project that requires students to recommend specific
business decisions. The application of classroom games is described by Barth et al. (2004)
and Eckles and Halek (2007). The effect of risk framing on choices under uncertainty
is explored in the games structured by Barth et al., while the impact of asymmetric
information is a specific objective in the classroom games structured by Eckles and
Halek. The dynamic environment created by an interactive prediction market provides
a forum to undertake decisions and compete against peers that is distinct from these
earlier projects. By using a prediction market and obtaining data on an underlying
“asset,” in this case state-wide insurance industry losses, we are not imposing decision
parameters on students. Instead, students evaluate and reevaluate their decision-making
criteria and learn to appreciate the emotional and psychological inputs into risk decision
making in a realistic setting.

THE INSURANCE LOSS MARKET
We describe here a prediction market structure as it is applied in an undergraduate
business program at the University of Limerick. Prediction markets are also known as
collective intelligence networks, and the software required for their operation is available
from a number of commercial providers. Prediction market platforms allow multiple
users to make forecasts about the probability of future events as diverse as movie box
office sales and election results. By forecasting a specific outcome, individual market
participants marginally influence the expected probability of that outcome. With large
numbers of market participants accurate and reliable estimates of event probabilities are
likely to emerge. The dynamic nature of the prediction market allows these probabilities
to fluctuate in real time as participants act and react to the arrival of new information.

The prediction market described here used software provided by QMarkets, one of a
number of commercial providers. The increasing popularity of prediction markets and
the greater breadth of applications have encouraged the creation of open source software
that allows users to download and create their own prediction markets. Thus, this type
of project could be easily replicated in other educational settings.

We describe here the application of a prediction market that is designed specifically for an undergraduate module, called Principles of Risk Management. This module introduces students to the qualitative and quantitative skills required in risk assessment, risk control, and risk financing. The module is delivered in a traditional format through a series of lectures and tutorials offered over a 12-week semester. The learning outcomes identified are reinforced through student participation in a custom-designed prediction market called the Insurance Loss Market (ILM). This market allows the 430 undergraduate students registered for the module to forecast weekly losses in the insurance industry. Specifically, students are required to predict weekly insured property loss estimates for California, New York, and Florida.1 The forecasting and trading process is detailed in the next section. The market dynamic allows students to activate their skills in mathematical competency and qualitative risk assessment in real time. During each 5-day period, each student was required to undertake at least one trade in each of the three states. The ILM was open for trading 24 hours a day and was run over a 10-week period. At the market close on each Friday, forecasts were evaluated against the gross property loss estimate as notified by the data provider, Xactware. The simplicity of the ILM interface and the data provided by Xactware concealed a sophisticated process that allowed for the provision of highly accurate data at the end of each week.2

Market Operation
At the beginning of every week, Monday 9 a.m., each student is provided with
5,000 units in notional “risk” capital that they must allocate to loss bands in each of
the three U.S. states. Figure 1 provides a screenshot of the ILM interface. Historical data
on insurance losses for the three states are made available to the students at the beginning
of the semester, and the first 2 weeks of the semester are used to allow students to famil-
iarize themselves with the operation of the market. During this period students learn
quickly about the variability in weekly insurance losses. Given the element of “luck” in making an accurate prediction, students were required to apply a number of aspects of risk management so that their capital allocation strategy performed consistently from week to week. As discussed in the following section, the students who performed consistently

1 The data provider, Xactware, included 5 years of loss data for each of the three states. These data were made available to students at the beginning of the semester, and students were encouraged to consult this data bank when undertaking decisions. Although there was a degree of “luck” attached to forecasting losses, the exercise demonstrated to students that historical data could be useful but had to be used with care. In addition, the degree of accuracy required of the students was reduced by requiring them to forecast loss bands rather than point estimates.

2 The process flow used by Xactware to generate the data can be described as a “full-cycle
claims workflow.” Each week, Xactware typically receives a first notice of loss from an insurer
that includes the type of loss, the physical address of the loss location, along with varying
amounts of supporting information dealing with coverage types/amounts, and a description
of the circumstances surrounding the loss. This information is then forwarded to either a
claims adjuster, repair contractor, independent adjuster or someone else who is responsible for
completing an estimate of repairs. That recipient connects to the Xactware network, using a
local installation of their estimating application (Xactimate), and proceeds to complete a unit
cost repair estimate of the damages. Once completed, the recipient uploads the final estimate to
the network (XactAnalysis), where Xactware mines the various data elements contained in that
detailed repair estimate.

FIGURE 1
ILM Screenshot

Note: The New York market is shown. Trading activity by market participants implies that there is a 10.6 percent probability that losses in New York will be >$9m and ≤$10m for the week ending October 9, 2009.

well are those who recognize that the “luck” element can be reduced through allocating
capital across a number of loss bands in each state.

As trading activity commences, the market dynamic produces an expected distribution of likely outcomes as participants evaluate historical information, such as recent weather patterns, insurance hazards, and loss statistics, as well as forward-looking information such as hurricane development, weather forecasts, and potential hazards such as wildfires posed by prolonged periods of drought. There is wide availability of new information on weather-related hazards such as fires, windstorms, and hail, as well as other relevant information. Market participants must evaluate the importance of the available historical information as well as the relevance of new information when making a decision.

As participants select a specific loss band, its value increases and the value of all other loss bands simultaneously decreases proportionately. In order to increase trading and improve liquidity, most prediction markets use an automated market maker. When a buyer or a seller posts an order, the automated market maker automatically fills the order and adjusts the price of the asset using a mathematical formula. In this case, it is not necessary to match buyers and sellers. Allowing transactions to occur immediately reduces the complexity of the market interface, which has the effect of lowering knowledge barriers and promoting participation (Christiansen, 2007). Detailed descriptions of the operation of automated market makers are given by Hanson (2007), who describes the market scoring rule, and Pennock (2004), who describes the dynamic parimutuel market maker.
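To illustrate how an automated market maker of this kind can quote prices without matching buyers and sellers, the sketch below implements Hanson's (2007) logarithmic market scoring rule in Python. The liquidity parameter b, the number of loss bands, and the trade sizes are arbitrary choices for the example; the ILM itself may use a different rule, such as Pennock's (2004) dynamic parimutuel market maker.

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b)) of the logarithmic
    market scoring rule (Hanson, 2007)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices, which sum to 1 and can be read as implied
    probabilities for each loss band."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(quantities, band, units, b=100.0):
    """Amount the market maker charges a participant for buying `units`
    of outcome `band`."""
    new_q = list(quantities)
    new_q[band] += units
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# Hypothetical example: five loss bands, initially untraded.
q = [0.0] * 5
print(lmsr_prices(q))            # uniform 0.20 implied probability per band
cost = trade_cost(q, band=2, units=50)
q[2] += 50
print(round(cost, 2), [round(p, 3) for p in lmsr_prices(q)])
```

Buying units of one band raises its implied probability and lowers the others, which mirrors the proportional adjustment of loss-band values described above.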

In a similar manner to assets traded on a liquid market, the value of units in a specific loss band may become prohibitively expensive, forcing students to make alternative selections or wait for the unit value of that loss band to fall. Many aspects of market activity are similar to those carried out in the insurance markets each day as insurance and reinsurance underwriters allocate, trade, and transfer insurance risks.

Importantly, participants in the ILM are predicting events in “real time.” This overcomes many of the weaknesses of alternative risk-decision methodologies used in education and industry, such as simulations based on an historical event or historical asset behavior. The type and level of activity in the market is at the discretion of each participant, and the decisions they make in this regard are seen as a key part of the learning process. All decisions are taken on an individual basis; however, consultation with classmates is encouraged. In order to retain participation throughout the semester, ILM participants must undertake at least one trade in each state each week. There is no upper limit on the number of trades they can undertake, and they can continue to trade as often as they like (“buying” or “selling” risks) throughout the week until the ILM closes on Friday at 17:00. There are no transaction costs imposed on student portfolios. Later that day, or early the following week, the actual loss estimates for that trading period are received from Xactware. The closing position of each participant is reconciled against the actual loss data and is used to calculate the value of each student's portfolio, as shown in Equation (1).

Portfolio_A = Cash Balance + (Units_CA × 100) + (Units_FL × 100) + (Units_NY × 100).    (1)

The portfolio value for Participant A is calculated as the number of units they hold in the correct loss band for each U.S. state multiplied by 100 (100 percent), plus the cash they did not allocate. The metric for evaluating activity and decision making in the ILM places primary importance on forecasting accuracy.
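A minimal settlement routine corresponding to Equation (1) might look like the following Python sketch. The holdings, band labels, and realized loss bands shown are hypothetical illustrations, not data from the ILM.

```python
# Weekly settlement following Equation (1): units held in the correct loss
# band for each state pay 100; all other units expire worthless.

PAYOFF_PER_UNIT = 100  # each winning unit redeems at 100 (i.e., 100 percent)

def settle_portfolio(cash_balance, holdings, realized_bands):
    """holdings: {state: {loss_band: units}}; realized_bands: {state: band}."""
    value = cash_balance
    for state, bands in holdings.items():
        winning_band = realized_bands[state]
        value += bands.get(winning_band, 0) * PAYOFF_PER_UNIT
    return value

# Hypothetical participant A: 2,000 units of unallocated cash plus positions
# spread over several loss bands in each state.
holdings_A = {
    "CA": {">=10m_<11m": 10, ">=11m_<12m": 5},
    "FL": {">=5m_<6m": 12},
    "NY": {">=9m_<=10m": 8},
}
realized = {"CA": ">=11m_<12m", "FL": ">=5m_<6m", "NY": ">=8m_<=9m"}
print(settle_portfolio(2000, holdings_A, realized))  # 2000 + 5*100 + 12*100 = 3700
```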

RESULTS ON RISK DECISION MAKING AND LEARNING
The primary objective of this research is to create a challenging learning environment for
risk management students. This environment should encourage a more critical perspec-
tive on risk decision making and the popular quantitative techniques that are applied in
practice. One of the interesting aspects of using the prediction market was the immediate
change in mindset that it produced among the students taking the module in Principles
of Risk Management. As mentioned, the ILM was live for a 10-week period during the
fall semester 2009. This was preceded by 1 week in which students were encouraged to
access the ILM for a trial period of 1 week. The simplicity of the questions and the nature
of the underlying risks being evaluated facilitated immediate participation by a large
proportion of the class. During the initial weeks of the semester very few instructions
were provided to participants.

The minimal level of guidance provided during this initial phase was deliberate, and it had the desired effect of creating discomfort among participants as they attempted to evaluate the possible range of gross property losses in New York, Florida, and California during that week. This “hands-off” approach allowed participants to evaluate the decisions they were making in an unbounded atmosphere, with little consideration for the norms recommended by risk management theory and practice. This approach gave rise to informal queries from students during the trial week, such as: “What is the right approach?” and “When should I decide on the appropriate loss band?,” as well as other comments that included “Isn't this just gambling?” and “It is hard to get enough data to make a decision.” Decision making (trading) in the market is motivated both by the fluctuating value of a specific loss band as it increases or decreases in popularity and by relevant external risk information provided by sources such as the National Hurricane Center.

Assessment for the module was designed to promote a high level of participation in the ILM structure.3 The level of activity in the ILM is revealed in Figure 2, which summarizes the average number of trades undertaken in California, Florida, and New York. We can observe that in the initial week, students undertook an average of 13.12 trades in the California market, 12.74 in the Florida market, and 10.11 in the New York market. Over the 10-week period, the average number of trades undertaken showed a marginal decrease. In the final week of the market, the average number of trades for California was 7.61, while trades for Florida and New York averaged 6.29 and 9.22, respectively. It is worth noting that, throughout the entire 10 weeks, participation in the market exceeded the minimum participation limits that were set as part of the module requirement.

Following the first week of live trading in the ILM, participants were provided with
historical data that gave gross property losses for each of the three states for the 5-year
period 2004 to 2008.4 The provision of this information coincided with the beginning
of a series of lectures on risk assessment and risk measurement. These lectures intro-
duced students to fundamental concepts such as randomness and variability around an
expected value as well as the useful characteristics of normality.

Students were encouraged to examine the historical loss data and explore how it could be used in their ILM decisions. An experienced risk management professional would immediately recognize that the historical data would provide only very crude predictive information. For those participating in the ILM, the recognition that historical data must be used carefully was learned through the interactive experience of evaluating, undertaking, and reversing decisions.

As the weeks progressed and students became more familiar with the dynamic of the ILM, we reduced the width of the loss bands.5 From the fifth week of live trading on the ILM, loss bands were held constant. This allowed us to evaluate progress in participants' ability to undertake decisions and control their risk exposure. A comparison

3 Twenty-five percent of the total marks in Principles of Risk Management were assigned to the ILM part of the module. Marks were assigned on a weekly basis, with a total of 8 marks available for participation (a minimum of 1 transaction in each insurance region), 9 marks for performance relative to peers (a maximum of 9 marks for a top 20 percent finish, declining by 1 mark for each further 10 percent band), and a maximum of 8 marks available for a one-page report on students' decision-making behavior in the ILM.

4 Data provided by Xactware for quarterly (3-month) periods.
5 Changes to loss bands were initiated in California in Week 4, where bands were reduced from $5m (e.g., losses will be ≥$10 million and <$15 million) to $1m (e.g., losses will be ≥$10 million and <$11 million). Narrower bands were applied to all states by Week 5 and remained narrow for the remaining 5 weeks of live trading.

FIGURE 2
Weekly Trading by Region on the Insurance Loss Market

FIGURE 3
Weekly Data on the Number of Positions Held by Market Participants

Note: The number of participants categorized with low-level diversification fell, while participants
holding three or more positions increased across the 10-week period.

of the distinct trends in trading behavior between Figures 2 and 3 demonstrates a strong learning dynamic among the student population. Figure 3 shows the number of positions (loss bands) held by market participants each week. We can see quite clearly that there is a strong trend among participants to decrease their exposure to any single loss band. This trend coincides with the drop in the number of trades undertaken each week observed in Figure 2. This shows that market participants are recognizing the uncertainty of the environment and, although they may use historical data as a guide, they are managing their exposure by selecting a wider range of loss bands. In this context, the fall in the number of trades undertaken by participants appears to be a recognition of the difficulty of profiting by actively trading insurance exposures based on sparse information that is available to all participants.

FIGURE 4
Average Number of Positions Held per Week

Note: Participants are ranked and grouped by performance.

Given the sparse historical data available, the ILM environment is one of Knightian uncertainty, and it forces participants to evaluate and manage risk without recourse to robust statistical measures. In the early weeks of ILM activity, participants relied heavily on the most recent weeks' loss experience. Activity centered on one or two loss bands, while loss bands that appeared distant from recent experience remained untraded. Participants were thus undertaking highly risky behavior, where a minor weather event could easily counter their market position. The increasing use of diversification as a mechanism for managing risk is one of the key outcomes from the market. Furthermore, when market participants are grouped according to performance, we can see that those who performed strongest over the 10-week period demonstrated the greatest engagement in overall diversification.
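The diversification effect described above can be illustrated with a small Monte Carlo simulation comparing a participant who concentrates all capital in a single loss band with one who spreads it across several bands. The band probabilities, unit cost, and payoff rule below are assumed values chosen purely to show why the diversified strategy produces more consistent weekly results.

```python
import random
import statistics

random.seed(1)

# Hypothetical implied probabilities for five loss bands in one state.
band_probs = [0.10, 0.25, 0.30, 0.25, 0.10]
CAPITAL, PAYOFF = 5000, 100  # weekly risk capital and payoff per winning unit
UNIT_COST = 25               # assumed flat purchase price per unit

def weekly_value(allocation):
    """Simulate one week: only the realized band pays PAYOFF per unit held."""
    realized = random.choices(range(len(band_probs)), weights=band_probs)[0]
    units = [amount / UNIT_COST for amount in allocation]
    return units[realized] * PAYOFF

concentrated = [0, 0, CAPITAL, 0, 0]   # all capital on the modal band
diversified = [CAPITAL / 5] * 5        # capital spread evenly across bands

for name, alloc in [("concentrated", concentrated), ("diversified", diversified)]:
    results = [weekly_value(alloc) for _ in range(10_000)]
    print(name, round(statistics.mean(results)), round(statistics.stdev(results)))
```

Under these assumptions the concentrated strategy has a higher expected payoff in a lucky week but swings between large gains and total losses, while the diversified strategy delivers a steady weekly value, which is the behavior the strongest ILM performers converged on.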

Weekly performance was based on the value of each participant’s portfolio when the
markets were resolved at 17:00 GMT each Friday as summarized in Equation (1). Figure 4
illustrates the trading behaviors of participants ranked by their overall performance.
Those who performed strongest, the top 20th percentile, engaged in a markedly higher
level of diversification. This provides robust evidence of the validity of the ILM as a
teaching methodology in risk assessment and risk management.

CONCLUSIONS
This article describes the creation of a market in insurance losses and its application in
risk management education. The unique application of real-time insurance losses and
prediction market technology allowed students to explore the practical considerations
in managing and trading insurance exposures. Incorporating this teaching instrument
into university education has clearly had a positive impact in engaging students in the
subject area and teaching them about the dynamics underlying the insurance system.
More broadly, the use of prediction market technology in risk management education is shown here to improve critical thinking and provide an important starting point for introducing students to more sophisticated risk modeling and risk management techniques. The availability of historical insurance loss data through commercial providers such as Xactware, as well as the wide range of prediction market software, means that the project described here can be applied in other universities. Furthermore, this approach to augmenting the teaching of risk management could be operated as a joint venture among universities, thus allowing a larger number of participants to forecast, trade, and discuss insurance risks in an educational setting.

REFERENCES
Barth, M., J. Hatem, and B. Yang, 2004, A Pedagogical Note on Risk Framing, Risk Management and Insurance Review, 7(2): 151-165.

Born, P., and W. Martin, 2006, Catastrophe Modeling in the Classroom, Risk Management and Insurance Review, 9(2): 219-229.

Christiansen, J. D., 2007, Prediction Markets: Practical Experiments in Small Markets and Behaviours Observed, Journal of Prediction Markets, 1(1): 17-41.

Eckles, D., and M. Halek, 2007, The Problem of Asymmetric Information: A Simulation of How Insurance Markets Can Be Inefficient, Risk Management and Insurance Review, 10(1): 93-105.

Hanson, R., 2007, Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation, Journal of Prediction Markets, 1(1): 3-15.

Hoyt, R., L. Powell, and D. Sommer, 2007, Computing Value at Risk: A Simulation Assignment to Illustrate the Value of Enterprise Risk Management, Risk Management and Insurance Review, 10(2): 299-307.

Joaquin, D., 2007, Loss Modeling Using Spreadsheet-Based Simulation, Risk Management and Insurance Review, 10(2): 283-297.

Kinchin, I. M., D. Chadha, and P. Kokotailo, 2008, Using PowerPoint as a Lens to Focus on Linearity in Teaching, Journal of Further and Higher Education, 32(4): 333-346.

Knight, F. H., 1921, Risk, Uncertainty, & Profit (New York: Harper & Row).

Pennock, D. M., 2004, A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation, in: Proceedings of the 5th ACM Conference on Electronic Commerce (New York: ACM), pp. 170-179. doi:10.1145/988772.988799.

Russell, D., 2000, Two Classroom Simulations in Financial Risk Management and Insurance, Risk Management and Insurance Review, 3(1): 115-124.


© Risk Management and Insurance Review, 2010, Vol. 13, No. 1, 85-109
DOI: 10.1111/j.1540-6296.2009.01175.x

PERSPECTIVE ARTICLES

TECHNOLOGY’S EFFECT ON PROPERTY–CASUALTY
INSURANCE OPERATIONS
Robert Puelz

ABSTRACT
The post-Glass–Steagall era has presented insurers with new opportunities and
risks during a time when information flows and business processes are be-
ing impacted by changing technology. In this article, we explore how insurers
use and perceive current technology to carry out their operations by report-
ing results from a sample of insurers that includes some of the nation’s largest
property and casualty insurers. We find among insurers in our sample that an
online channel is having a significant impact on customer retention and rev-
enue enhancement, but a lesser impact on cost reduction. Interestingly, about
two-thirds of our sample has experienced an increase in their overall number
of transactions following the adoption of an online channel. Moreover, while the Internet is perceived as giving marketing benefits, it is not being used as a
substitute for agents. We find that 65 percent of respondents have used technol-
ogy to integrate customer data across functional areas and another 23 percent
plan to do so in the next 3 years. Nearly 71 percent of respondents have or plan
to adopt service-oriented architecture in their technology infrastructure.

INTRODUCTION
One of the tasks of insurance academicians is to help stakeholders and students of the
industry understand the functioning of insurance markets, the risk transfer that takes
place, and the business proposition of how one can maximize the wealth of an insurer’s
owners. Structural shifts in business occur for reasons attributable to knowledge, cre-
ativity, and vision; technology is often a catalyst that nurtures new ways of thinking.
The following encapsulates one insurance executive’s thinking about the industry:

Among the student body are many who will be in the next generation of leaders in
the insurance industry. They can look forward to a career with even more stimulating
challenges than the industry offers today. There will be fewer people doing things that
machines can do and more people doing those important things that only people can

Robert Puelz is the Dexter Professor of Insurance, Edwin L. Cox School of Business, Southern
Methodist University, Dallas, TX 75275; e-mail: rpuelz@mail.cox.smu.edu. I am grateful to Jerry
Johns of the Southwestern Insurance Information Services, and my communications with David
Repinski of Cunningham & Lindsay, Mike Reid of Liberty Mutual, Jim Snikeris of Farmers, and
James Lankford of Texas Farm Bureau. Finally, thanks to Robert Quirk and Henry Wyche for
research assistance. This article was subject to double-blind peer review.

do. The most challenging aspects of these electronic methods are the human rather
than the mechanical—the decrease in routine tasks; the varied new skills which are
needed for the new jobs created; and the growing importance of research, analysis,
organization, and planning. There are truly interesting years ahead for all who are so
interested in insurance.

The quote appeared a half-century ago in the Journal of Risk and Insurance and its ap-
plicability today is remarkable. Indeed, it could be argued that some insurance firms,
caught in a managerial stasis of thought, would do well to heed the call by Bagby about
the “growing importance of research [and] analysis.” For some insurers their internal
structure has remained settled over the years with the areas of pricing, underwriting,
and claims the predominant functional areas that define this business form. Taking
an appropriate risk for a given price then honoring the contract when a loss occurs
is the essence of value provided by insurers. While we know risk transfer is as old
as the “contract of Bottomry” included in the Code of Hammurabi, the recent changes
in the legal environment and unprecedented technological innovation present oppor-
tunities not seen by insurance managers of the past.1 Gramm–Leach–Bliley (Financial
Services Modernization Act of 1999) has given a structural opportunity to other financial
institutions to enter the insurance business and vice versa. Optional federal chartering
of insurers as an alternative to state regulatory regimes is an idea that has not yet gained
significant traction in Congress but affords the opportunity to create an insurance envi-
ronment with more flexibility, choice, and competition.2 Relaxing legal strictures offers
the potential for an unencumbered, more diversified financial environment. Perils exist
for current management, however, since stakeholders expect more flexible thinking.3

Staid and mature insurers and their management teams are not likely to exist in a more
traditional form as new competitors enter their market; consequently, insurers ought to
be ripe for new ideas that develop profitable lines of business and control costs. How
an insurer has used technology to enhance a functional area or its integration with
other operational components likely reveals the wisdom of management in enhancing
shareholder value.

The process of managing workflow is part strategic, part administrative. While the Inter-
net may be used to receive marketing inquiries, some companies with exclusive agency
arrangements weigh the benefits of direct marketing communications against disenfran-
chising the existing distribution channel. Thus, for example, Texas Farm Bureau, which is
a rural insurer with about 180 offices spread throughout Texas, takes an Internet inquiry
and feeds it to a member of its captive agency force.4 Once the application is taken, the
process is automated with technology beyond the Internet playing a role. Agents submit

1 A contract of Bottomry dates to Babylonian times where loans were forgiven if a ship suffered
a robbery loss while in transit. If the ship’s journey was uneventful, the interest charged on
the loan was higher than normal market conditions; in other words, it included an insurance
premium (see Trenerry, 1926).

2 Optional federal chartering of insurers has been studied for life insurers (see Bair et al., 2002),
and England (2005) has provided a more general synopsis on the topic that includes numerous
references to the work of Grace and Klein (2000) and Harrington (2002).

3 The academic literature is turning in this direction, too. Skipper and Kwon (2007) include a
chapter dedicated to the issue of financial services integration in their recently released textbook.

4 Thanks to James Langford of Texas Farm Bureau for sharing the operational process of his firm.

applications electronically through a company network. The underwriting process for auto insurance begins with the raw data being fed into ChoicePoint software that is given parameters by management for the risk's acceptability.5 In addition, an electronic check of the new applicant's prior carrier activity, credit rating, and motor vehicle report is undertaken; an overall profile of the risk is created; and an electronic underwriting determination is made. In cases where the applicant does not fit the profile established by management, manual underwriting is undertaken as a second tier of investigation of a risk's acceptability. In this example, technology's advantage over human intervention delivers both error elimination and scale economies to the insurer's operations while, for this firm, maintaining loyalty among its traditional distribution channel.
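To make the two-tier workflow concrete, the following Python sketch mimics an electronic first pass followed by referral to manual underwriting. The thresholds, field names, and acceptance rule are entirely hypothetical; they are not ChoicePoint parameters or Texas Farm Bureau's actual underwriting guidelines.

```python
from dataclasses import dataclass

@dataclass
class AutoApplication:
    credit_score: int
    at_fault_accidents_3yr: int
    moving_violations_3yr: int
    prior_carrier_lapse_days: int

# Hypothetical acceptability parameters "set by management".
RULES = {
    "min_credit_score": 620,
    "max_at_fault_accidents": 1,
    "max_moving_violations": 2,
    "max_lapse_days": 30,
}

def electronic_underwriting(app: AutoApplication) -> str:
    """First-tier electronic decision; failures are referred, not declined."""
    fits_profile = (
        app.credit_score >= RULES["min_credit_score"]
        and app.at_fault_accidents_3yr <= RULES["max_at_fault_accidents"]
        and app.moving_violations_3yr <= RULES["max_moving_violations"]
        and app.prior_carrier_lapse_days <= RULES["max_lapse_days"]
    )
    return "accept" if fits_profile else "refer to manual underwriting"

print(electronic_underwriting(AutoApplication(700, 0, 1, 0)))   # accept
print(electronic_underwriting(AutoApplication(580, 2, 3, 45)))  # refer to manual underwriting
```

The design point is that the automated tier handles the routine cases at scale, while anything outside the management-set profile is escalated to a human underwriter rather than rejected outright.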

A broader view of technology's effect on the operations of property–casualty insurers is the focus of this article. While a LOMA forum notes that 41 percent of the information technology budget goes to core, fundamental processing of the insurance business and only 19 percent is allocated to channel management, there is an expectation that IT will be better utilized to help grow business if systems are in place that can support new growth.6 Thus, one question answered by this research is which existing technologies insurers identify as helping to grow their business, and where they expect growth to occur. An associated question is which existing technologies insurers identify as helping to service their existing business. Answers to these questions are provided by the responses of 17 insurers to a survey instrument that focused on the operational impact of technology.

One of the goals of the article is to move beyond the traditional siloed approach to insurance operations and present current management ideas that take advantage of technology to modify overall operations. Since insurance industry profitability in recent years has been driven by investment performance that has offset insured losses and operational expenses, successful methods to minimize cash outflow or turn an insurance profit may reside in technological innovation. How have the operational pieces of an insurance company's structure become more integrated through the advent of technology? What are the key technological innovations used in practice, and how has their utilization translated into efficiency, market opportunity, and profitability? The survey instrument utilized in this article and included in the Appendix was structured with support from the industry through interviews and other communications. While the analytical approach of this article is necessarily descriptive, the results are revealing about how the effective use of technology and innovation has altered the managerial landscape in the insurance industry.

BACKGROUND TO THIS STUDY
As background, the traditional view of insurance company operations is encapsulated
in various industry texts; for example, Myhr and Markham (2004) describe three main
functional areas of an insurance company (marketing, underwriting, and claims) sup-
ported by nine additional areas outlined in Table 1.

5 See http://www.choicepoint.com/business/pc_ins/pc_ins.html.
6 See http://www.loma.org/res-08-05-SF-anaylsts.asp.

TABLE 1
Breakdown of Functional Areas

Main functional areas: Marketing | Underwriting | Claims
Supporting areas: Loss control, Reinsurance, Human resources, Actuarial, Legal services, Investments, Information technology, Premium audit, Accounting

Because an insurer has a well-defined process, the insurance business model begins with
this structure, running the risk that strategy will be considered and implemented within
these core silos without considering interactions. Myhr and Markham (2004) discuss
interdependence in the following manner, “Although each function within an insurer
must have some autonomy to perform its work, those functions are far from completely
independent. They must interact constantly if the insurer is to operate efficiently,” al-
though that is about the only attention these writers pay to the topic. Trieschmann
et al. (2005) give a different explanation of an insurer’s operations. They offer the fol-
lowing listing of insurer functions: production, underwriting, rate making, managing
claims and losses, investing and financing, accounting, and miscellaneous activities that
involve legal, marketing research, engineering, and personnel management. Pritchett
et al. (1996) are brief in their description of an insurer, offering the “flow of an insurer’s operation” to include management, actuarial, marketing, underwriting, administra-
tion, investments, legal, and claims. The intent of this research is to provide and quan-
tify that broader perspective. One goal of this research is to lay the foundation for a
refreshed understanding of how traditional operational areas can be melded together
by technology. This integration of the functional areas, conceptually depicted in Figure
1, overlays the distinct operational areas upon one another with technology serving as
the bonding agent or, at least, permitting managers to adopt technology as a bonding
agent.

THE SURVEY AND THE PARTICIPANTS
The survey instrument was web-based and entailed about 40 questions. The final survey
product had the benefit of input through conversations with various insurance industry
executives. The instrument was e-mail distributed through both the Southwest Insurance
Information Service (SIIS) and the National Association of Mutual Insurance Companies
(NAMIC) in which member companies were invited to participate.7 The 17 insurers

7 The initial survey emailing was handled by the NAMIC and the SIIS. The NAMIC represents
“1,400 member companies that underwrite more than 40 percent of the property/casualty insur-
ance premium in the United States” (http://www.namic.org/about/default.asp). According to
Jerry Johns, the SIIS represents “about 160 insurers” and “they write about 60 percent of property
and casualty premiums in Texas and about the same around the world.”

FIGURE 1
Functional Areas of an Insurer

TABLE 2
Insurance Company Respondents

Texas Farm Bureau Mutual Insurance Company        Liberty Mutual^a (6)
Farmers Insurance^a (3)                           Mercury Insurance
Texas Windstorm Insurance Association             Magna Carta Companies
Allstate^a (4)                                    Infinity Insurance Companies
American Modern Insurance Group                   Nationwide^a (7)
Service Lloyds Insurance Company                  State Farm Insurance^a (1)
Accredited Surety and Casualty Company, Inc.      Travelers^a (5)
Hochheim Prairie Farm Mutual Insurance Assoc.     EMC Insurance Companies
Beacon Insurance Group

^a Companies identified as “large” in this study had total assets of at least $19 billion. The rank of these companies by direct premiums written in 2006 in the United States is in parentheses. See http://www.iii.org/media/facts/statsbyissue/industry/.

that responded are listed in Table 2. While the number of responding companies has
created a relatively small sample, it does include major U.S. insurers along with a
number of small insurers. The size effect on technology utilization is apparent in the
results.

TABLE 3
Importance of Online Channel to Business Goals

                            Average
Small insurers
  Cost reduction            2.09
  Revenue enhancement       2.18
  Customer retention        2.63
Large insurers
  Cost reduction            2.17
  Revenue enhancement       2.67
  Customer retention        2.33
All insurers
  Cost reduction            2.12
  Revenue enhancement       2.35
  Customer retention        2.53

FINDINGS: THE ONLINE CHANNEL
A natural starting point for research into the role of technology for insurers is the role
of the Internet. Our first area of interest was the value proposition of online distribution
channels. Do insurers view this channel as an opportunity to reduce costs, enhance
revenues, or better retain their customers? Dennis Campbell (2003) has written on the impact of electronic distribution channels in financial services; in his sample of customers at a single bank, he finds that online customers conducted more transactions while also monitoring bank activity more closely to avoid minimum-balance penalties. While Campbell finds that online customers are less profitable to a bank in the short run, he also finds that these customers were more loyal, exhibiting higher retention rates.

In our survey, participants were asked to assess, from low (1) to high (3), how an online channel has helped in cost reduction, revenue enhancement, and customer retention. The results in Table 3 indicate that, from a management perspective, an online channel does not have a low impact on any of the key value propositions; the averages range from medium to high, indicating the channel's importance to all three business goals. In particular, customer retention appears to be a driving force behind insurance management's interest in an online channel, more so among relatively smaller insurers. Larger insurers see
revenue enhancement as a key reason for having a web presence. While both Campbell’s
(2003) results and the findings herein were obtained from stakeholders who fall under
the financial services umbrella, Campbell’s sample was at the consumer level and this
survey is at the managerial level.

Interestingly, Campbell's (2003) finding of revenue reduction when technology is deployed in the form of PC banking is not mirrored by insurer management's expectation that an online channel will enhance revenue.


TABLE 4
Effects After Adoption of Online Channel (proportion of respondents answering "yes")

                                                              Small    Large    All
Proportion experiencing transaction volume increase            0.73     0.50    0.65
Proportion experiencing erosion of offline business
  following online channel adoption                            0.55     0.17    0.41

To gain further insight into what is driving insurer expectations for online channel performance, we asked two value-proposition questions in which survey participants assessed their results following the adoption of an online channel. First,
did total transaction volume increase following the adoption of an online channel, and
second, was there an erosion of offline business following the adoption of an online
channel? “Yes” took the value of 1 and “no” took the value of 0. While 65 percent of all
respondents indicated that total transaction volume did increase, 73 percent of smaller
insurers saw gains in total transaction volume. The overall sample results expressed
in Table 4 are consistent with Campbell’s (2003) findings for banks. The results are
also linked to the type of traditional distribution system in place among the survey
respondents.
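To make the coding concrete, here is a minimal Python sketch of how yes/no answers coded as 1/0 translate into the proportions reported in Table 4. The response lists are hypothetical, chosen only so that the group shares match the reported figures; they are not the underlying survey data.

```python
# Minimal sketch: converting yes (1) / no (0) survey answers into the
# proportions reported in Table 4. The answer lists below are hypothetical,
# chosen only so the group shares match the reported figures.

def proportion_yes(answers):
    """Share of respondents answering 'yes' (coded 1) rather than 'no' (coded 0)."""
    return sum(answers) / len(answers)

volume_increase = {
    "small": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1],  # 11 small insurers, 8 "yes"
    "large": [1, 0, 1, 0, 1, 0],                 # 6 large insurers, 3 "yes"
}

for group, answers in volume_increase.items():
    print(group, round(proportion_yes(answers), 2))   # 0.73 and 0.5

all_answers = volume_increase["small"] + volume_increase["large"]
print("all", round(proportion_yes(all_answers), 2))   # 0.65
```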

For example, since Cummins and VanDerhei (1979), there has been evidence that those
firms that utilize an independent agency system are less efficient compared with their
exclusive agency, direct-writing counterparts, and Berger et al. (1997) show that higher
quality services offered by independent agents justify their higher expenses and par-
tially explain why both types of distribution systems were able to coexist. An online channel changes the marketing distribution mix in today's environment, and interest among small insurers may reflect the cost-effectiveness of an online channel for insurers that have relied on more expensive traditional distribution methods. By contrast, an increase in transaction volume after establishing an online
channel was experienced by as many larger insurers as small insurers. Finally, sur-
vey evidence that the establishment of an online channel erodes current offline busi-
ness is not overwhelming. While only one in six large insurers experienced an ero-
sive impact, as many small insurers as not experienced an erosive impact. Whether
erosion is clearly attributable to the establishment of an online channel is multi-
faceted and is likely dependent on a number of market factors not addressed in this
research.


Taken together, the results in Tables 3 and 4 suggest that insurance management has ex-
perienced increased business activity by adopting an online channel that has not simply
been treated as a substitute for traditional business activity. The economic experience
is less clear and depends on the marginal profitability of the online customer vis-à-vis
more traditional methods.

FINDINGS: TECHNOLOGY WITHIN OPERATIONAL AREAS
An online channel, and questions about its effectiveness, is a major strategic interest of insurer management at a time when innovation pervades a variety of day-to-day operational activities. The opportunity to move away from a bricks-and-mortar office environment to a "virtual office" presents the possibility of significant cost savings for insurers; however, it introduces concerns about employee productivity, effective communication, and the value of an office work setting. Fritz et al. (1998) examine satisfaction among telecommuters vis-à-vis nontelecommuters and find that telecommuters reported higher communication satisfaction, potentially alleviating upper management concerns about introducing a virtual workplace environment. In our survey we were
interested in how insurers viewed the virtual office concept. We found that only about
half of our full sample of survey respondents indicated that the virtual office concept is
or has been important to their business strategy. Only a little more than 36 percent of
small insurers have embraced the virtual office concept, while about 67 percent of large
insurers have done so.

While the virtual office and the opportunities inherent in the Internet and online services have a history, albeit a relatively brief one, a primary focus of this research is to report on management's view of technologies being driven in the current marketplace. We
broke down the survey by asking participants to focus on three key insurance business
processes that define an insurer and separate it from other financial services firms. While
many insurers, particularly large insurers, would be more aptly described as offering a
menu of financial services and products, some of which relate to insurance, we limited
our set of questions to those related to traditional insurance functions: marketing-,
underwriting-, and claims-related technology.

Marketing
Our interest in the technological impact on marketing begins with how insurers view
the development of a website as a way to market directly to consumers. Insurers were asked to assess this question by choosing from a range of responses, from "not significant at my company" to "very large impact on value of our company." In reporting the results, for each survey item we give the average response of large insurers and the average response of small insurers.8 The impact varies considerably by size of insurer. Large insurers recognize that such development has been measurable, while small insurers have not noticed a significant impact. No insurer in the sample responded that a site dedicated to direct selling to the customer has had a very large impact. By contrast, insurers appear to see more value in a website that, while focusing on the customer, serves the purpose of connecting the customer with an agent. Large insurers, on average, see the impact as significant at their company, and small insurers recognize the impact as tending toward at least measurable and noticeable. Thus, while the Internet is perceived to convey marketing benefits to insurers, agents have not been replaced by this technological innovation.

8 Average differences that are statistically significant are noted; otherwise, average differences between large and small insurers were not found to be statistically significant. Underlying survey data were organized in Excel. An excellent source for statistical testing using Excel can be found at http://cameron.econ.ucdavis.edu/excel/excel.html. In addition, the numbering of reported results for marketing, underwriting, and claims questions is consistent with the survey instrument, but results are presented in the article based on the compositional approach.
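Footnote 8 indicates that differences between large- and small-insurer averages were tested for statistical significance using Excel. Purely as an illustration of that kind of comparison, the sketch below runs a two-sample (Welch) t-test in Python with SciPy; the 1-to-5 ratings are hypothetical stand-ins, chosen so their group means happen to equal the M2 averages, and are not the actual survey responses.

```python
# Illustrative two-sample comparison of large- versus small-insurer ratings,
# analogous to the Excel-based testing described in footnote 8. The ratings
# below are hypothetical stand-ins, not the actual survey responses.
from scipy import stats

large = [3, 2, 3, 2, 3, 2]                   # 6 hypothetical ratings, mean 2.50
small = [1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2]    # 11 hypothetical ratings, mean 1.36

t_stat, p_value = stats.ttest_ind(large, small, equal_var=False)  # Welch's t-test
print(f"large mean = {sum(large) / len(large):.2f}, "
      f"small mean = {sum(small) / len(small):.2f}, p-value = {p_value:.3f}")
```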

M2: Development of retail website focused on direct marketing to consumer*
Response scale for this and all subsequent survey items: 1 = Not significant at my company; 2 = At least measurable and noticeable; 3 = Moderate, but not very large; 4 = Significant at my company; 5 = Very large impact on value of our company.
Large insurers: 2.50; Small insurers: 1.36.
*Difference statistically significant at the 0.05 level.

M3: Development of retail website focused on consumer but connecting consumer with agent*
Large insurers: 3.83; Small insurers: 1.91.
*Difference statistically significant at the 0.05 level.

Given agents remain important to the sales process, how does technology play a role
in insurers training their agents? We asked respondents to assess three different forms
of training delivery methods with their agents: webcasts, podcasts, and personal dig-
ital assistants (PDAs). Oloruntoba (2006) defines and discusses a variety of current
learning technologies. Webcasts represent “streaming video delivered online.” Pod-
casts and PDAs have the ability of being more mobile, and the functionality of a per-
sonal digital assistant taking the form of a “smartphone,” which is a “hybrid mobile
phone/personal digital assistant.” By contrast, a podcast “is a method of distributing
multimedia files . . . using atom syndication formats for playbacks on mobile devices like
i-pods and personal computers.”


M4: Training agents through webcasts*
Large insurers: 3.50; Small insurers: 1.91.
*Difference statistically significant at the 0.05 level.

M5: Training agents through podcasts
Large insurers: 1.50; Small insurers: 1.09.

M6: Training agents through PDAs
Large insurers: 1.33; Small insurers: 1.00.

In our survey, the findings are clear that PDAs do not have a significant impact on how insurers train their agents. Webcasts, however, are important and among large insurers approach, on average, being a significant source of training. Podcasts are clearly not significant at small insurers and only marginally more significant at large insurers. Part of the explanation may be that while a key strength of podcast technology is as a more mobile training tool than a streaming webcast, a mobile device is required to take advantage of it.

M7: Communication of any kind with agents using mobile data devices
Large insurers: 1.83; Small insurers: 1.81.


The evidence is clear that marketing training via a handheld device does not get the attention of the insurance industry. We asked insurers to assess whether they had any communication at all with agents via a mobile data device, and the average response indicated that such communication was only approaching being noticeable and measurable.9 Thus, the marketing impact of technology appears to be focused on linking customers with agents, with a slow movement by the industry toward web-based agent training and little need or perceived value in communicating with agents via mobile devices.

Underwriting
We are next interested in how insurers use technology to gather information necessary
to evaluate the risk profile of their insurance applicants. In an underwriting context, information feeds knowledge and offers an insurer the opportunity to (1) gain comparative advantage over its competition and (2) narrow the information asymmetry that often exists when insurance applicants know more about their intrinsic risk.10

Technology can serve at a variety of points in the underwriting process, so we were first interested in whether insurers acquired their information from agent input on a website or an internal network, and whether customer applicants were permitted to self-input at least some of their underwriting information directly. The results indicate that agent web-based input of data has been adopted, on average, even among smaller insurers, and that 4 of 17 insurers responded that web-based input had a very large impact on the value of their company. Smaller insurers appeared to emphasize the web over an internal network, on average, compared with their larger counterparts. By contrast, customer entry of underwriting data has not gained much traction at small insurers but has become at least noticeable among large insurers.

U1: Permit customer entry of some underwriting information via a website*
Large insurers: 2.67; Small insurers: 1.36.
*Difference statistically significant at the 0.05 level.

9 While we have attempted to maintain consistency in the numbering of this section M1, M2,
etc. between these tables and the actual survey, you will note that the survey instrument has
7 questions for the section on marketing. Survey questions M1 and M7 were duplicates. We
omitted M1 when reporting results and kept M7. Survey respondents did vary in how they
answered M1 and M7. M1 in the survey had a mean of 2.67 for large insurers and 1.90 for small
insurers.

10 Readers are encouraged to read Deborah Smallwood’s piece on how underwrit-
ing is changing, http://www.fairisaac.com/NR/rdonlyres/F1DFEA70-14D4-4A3E-A1EB-
76B4DA78943B/0/Competitive_PandC_Underwriting_Tower_Group_Oct_2004


U3: Permit agent entry of underwriting information via a website
Large insurers: 3.16; Small insurers: 3.72.

U4: Permit agent entry of underwriting information via an internal network
Large insurers: 3.33; Small insurers: 2.63.

Once the data are received by an insurer, how are they evaluated? The utilization of
an expert system to evaluate the underwriting data is somewhat dependent on the
market niche of the insurer; for example, commercial underwriting is often more subject
to human judgment. We were interested in how respondents viewed and implemented an expert system in their underwriting process, and the average result across the full
sample indicated that such a technology had at least a moderate impact. The result for
large insurers tended toward a significant impact at these companies, likely reflecting
that major automobile insurers were included in this sample.

U2: Implemented the use of an underwriting expert system
Large insurers: 3.67; Small insurers: 2.73.

More specific technologies for gathering customer information include GPS mapping of property exposures, mileage monitors in automobiles, and whether insurers encourage and rely upon their customers' self-reporting of mileage. Both GPS mapping and mileage monitoring begin to encroach on the world of Telematics, where the combination of global positioning systems with wireless communication creates a real-time, individual-specific set of metrics that can be evaluated by management to assess whether these risk-attributing metrics are correlated with actual losses.11 In effect, automobiles are outfitted with devices that gather and communicate driving data. If a driver uses the car to drive "to and from work," the ability to gather and communicate real-time data would permit an insurer to observe, say, the time of day when driving occurs, the average speed at which the car moves, and the number of miles driven by day of the week. The number of new attributes that could be created for underwriting and pricing evaluation would be limited only by the creativity of management and by statistical evidence from the modelers that such attributes are valid; a key value proposition is the additional precision obtained in predicting future losses.
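As a concrete illustration of how such attributes might be derived, the sketch below turns a handful of trip records into per-driver metrics of the kind described above (miles by day of week, share of late-night driving, average trip speed). The record layout, the 10 p.m.-to-5 a.m. night window, and the attribute names are hypothetical choices for the example, not an insurer's actual telematics schema.

```python
# Hypothetical sketch: deriving candidate underwriting attributes from raw
# telematics trip records. The record layout and attribute names are
# illustrative assumptions, not an insurer's actual schema.
from collections import defaultdict

# (weekday, trip start hour, miles driven, average mph) -- hypothetical trips
trips = [
    ("Mon", 8, 12.0, 31.0),
    ("Mon", 17, 12.5, 28.0),
    ("Wed", 8, 12.0, 33.0),
    ("Sat", 23, 40.0, 62.0),
]

miles_by_weekday = defaultdict(float)
night_miles = 0.0
total_miles = 0.0
for weekday, start_hour, miles, avg_mph in trips:
    miles_by_weekday[weekday] += miles
    total_miles += miles
    if start_hour >= 22 or start_hour < 5:   # assumed late-night window
        night_miles += miles

attributes = {
    "total_miles": total_miles,
    "miles_by_weekday": dict(miles_by_weekday),
    "night_driving_share": round(night_miles / total_miles, 2),
    "mean_trip_speed_mph": round(sum(t[3] for t in trips) / len(trips), 1),
}
print(attributes)
```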

In our survey we focused on current practices of some insurers within the industry. The average insurer response indicates that GPS mapping technology to help gauge property insurance exposure is having at least a noticeable effect, while mileage monitors in automobiles are not yet a significant practice among insurers. While insurers can utilize GPS mapping to help assess catastrophic exposure for "big picture" management decisions, outfitting customers' automobiles with devices is both costly and potentially perceived by the customer as privacy invading. Indeed, one of the issues raised by Holdredge (2005) and echoed by the insurance industry is whether privacy concerns will outweigh the actuarial justification for a more focused measure of loss-producing capability. One way around privacy concerns is self-reporting, and our survey asked whether insurers might encourage Internet-facilitated self-reporting of mileage by their insured customers. The results were not surprising given the difficulty insurers face in independently verifying an individual's mileage assessment ex ante of a loss. Sixteen of 17 insurers answered that such self-reporting was not significant at their company.

To round out the exploration of underwriting we were interested in whether under-
writers were communicating with any other insurer party through the use of a mobile
data device. While smaller insurers appeared more inclined to utilize this technology in
contrast to large insurers, the overall results tend toward this communication path not
being very noticeable among insurance industry participants.

U5: GPS mapping of property exposures
Large insurers: 2.66; Small insurers: 2.36.

11 Holdredge (2005) provides an insightful discussion on the basics of Telematics and how it can
provide an insurer with strategic advantage.


U6: Utilize mileage monitors in cars
Large insurers: 1.16; Small insurers: 1.09.

U7: Utilize self-reporting of mileage by consumers via a website
Large insurers: 1.00; Small insurers: 1.09.

U8: Communication of any kind with underwriting and another party using mobile data devices
Averages: 1.66 and 1.36.

Claims
Once marketing and underwriting turn a potential risk/business opportunity into a customer, the prospect of claims inevitably becomes the subsequent consideration. Effective claims management recycles information back to underwriters and actuaries when claims are handled in a timely and accurate manner. This integration of activities possesses much value-added potential. Within claims, the "optimization proposition" is for an insurer to accurately assess its true contractual claim obligation and then pay that obligation in the appropriate time frame, subject to customer satisfaction.

Historically, claims management has relied on a manual assessment of claim validity, with quality reviews also undertaken manually. Claims management has perhaps moved farther down the path of adopting technological innovation than any other functional area of an insurer.12 There is now the opportunity for insurers

12 I am grateful to David Repinski, president of Cunningham & Lindsay, N.A., for insights that provided much of the background to this section.


to not only effectively handle claims in real time but to gather information from such
claims to make better future decisions.

Today, the process of adjusting a claim begins with an 800 number and insurers have
the opportunity to handle claims and validate the accuracy of claim payments utilizing
mobile hardware with a software interface that is provided by firms such as Marshall,
Swift, and Boeck (MSB) and Xactimate. These firms work as an electronic intermediary
between the insurer and the adjuster. Once notified of an incident, the insurer uploads
claim opening data to the claim vendor’s site and gives the field adjuster access. The field
adjuster downloads the basic facts of a claim, investigates the claim, and then uploads
the results of the claim investigation back to the claim vendor’s site. An advantage of
electronic communication is that it permits the stocking of a data warehouse that can
be mined for claim-handling assessments, quality testing, and adjuster performance
review.
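The hand-offs in that workflow can be pictured as a small set of upload and download operations against the vendor's site. The sketch below is a hypothetical, in-memory stand-in for that exchange; the class, method, and field names are invented for illustration and do not represent the actual MSB or Xactimate interfaces.

```python
# Hypothetical sketch of the insurer / claim-vendor / field-adjuster exchange
# described above. Names are illustrative; this is not the actual MSB or
# Xactimate interface.
class ClaimVendorSite:
    """Stands in for the vendor site that brokers claim data between parties."""

    def __init__(self):
        self._claims = {}

    def upload_opening_data(self, claim_id, opening_data):
        # Insurer posts the claim-opening facts and grants the adjuster access.
        self._claims[claim_id] = {"opening": opening_data, "investigation": None}

    def download_claim(self, claim_id):
        # Field adjuster pulls the basic facts of the claim before investigating.
        return self._claims[claim_id]["opening"]

    def upload_investigation(self, claim_id, findings):
        # Adjuster returns the investigation results; records like these can
        # later stock a data warehouse for quality testing and adjuster review.
        self._claims[claim_id]["investigation"] = findings


site = ClaimVendorSite()
site.upload_opening_data("CLM-001", {"peril": "hail", "policy": "HO-3"})
facts = site.download_claim("CLM-001")
site.upload_investigation("CLM-001", {"estimate": 4250.00, "photos": 12})
```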

In our survey we were interested in how insurers utilize technology to process in-
dividual claims. The questions took respondents through the process beginning with
whether they communicated with field adjusters via a cell phone when the claim was
incurred. Large insurers, on average, reported that mode of communication to be significant at their company, while small insurers reported a more moderate response at their companies. Once
the claim process had begun, we were interested in whether adjusters utilized digital photography and digital recorders to supplement a claim file. Overall results indicated digital photography to be a significant and valuable resource to the insurer respondents. While digital recorders in the field document aspects of the event and thus play an important role for insurers, the overall average result was only moderate. The use of portable printers in the field to give customers an on-site estimate yielded a result comparable to digital recorders overall and clearly plays a more important role among large insurer respondents, which view portable printers as significant at their company. An obvious need for insurers that is enabled by technology is quick communication between the field and the home office. We wanted to know about the role of wirelessly enabled laptops that permit adjusters to update an electronic file. We found that to be an important trend among all insurers, with the average result falling between a "moderate" and a "significant" impact at their company.

C1: Communicate with adjuster via cell phone when notified of a new incurred claim*
Large insurers: 4.00; Small insurers: 2.72.
*Difference statistically significant at the 0.05 level.


C2: Digital photography included in claim file to help assessment
Large insurers: 4.50; Small insurers: 3.73.

C3: Use of laptops with wireless technology so field reps can update electronic file*
Large insurers: 4.67; Small insurers: 3.00.
*Difference statistically significant at the 0.05 level.

C6: Use of portable printers on site so customer receives estimate in hand*
Large insurers: 3.67; Small insurers: 2.18.
*Difference statistically significant at the 0.06 level.

C7: Use of digital recorders to take a statement in the field
Large insurers: 3.50; Small insurers: 2.55.

The use of external software providers to help process claims electronically is more in favor than internally developed methods. Nine of 17 respondents reported that internally developed software was not significant at their company, and overall average results indicate that internally developed methods rated only somewhat above "not significant." By contrast, the overall average for internally developed methods was substantially below that for external products such as MSB and Xactimate, which have had at least a moderate impact on the value of the insurer respondents.

C4: Use of MSB, Xactimate, or other external vendor software in measuring value of property claim
Large insurers: 3.67; Small insurers: 2.81.

C5: Use of internally developed software to measure value of property claim
Large insurers: 2.00; Small insurers: 1.63.

How has technology permitted the centralization of call centers? The results from our sample indicate that centralization has had a significant impact on insurer value, particularly among large insurers, for whom centralization is revealed as an important consideration. Even among our defined sample of "smaller" insurers we found that centralization has had a moderate impact on insurer value. Finally, we were interested in the role and use of a high-technology vehicle in managing claims and communicating via satellite technology. Not surprisingly, this technology has been embraced by larger insurers that focus on personal lines, and the overall impact on company value is slightly above the moderate level; among smaller insurers the impact is nearly measurable and noticeable. The use of field-based global positioning systems to help locate an insurer's customers does not play a significant role at smaller insurers, with 7 of 11 respondents indicating that these devices are not significant at their company. Larger insurers rate the impact as only moderate.

C8: Centralization of call centers*
Large insurers: 4.67; Small insurers: 2.91.
*Difference statistically significant at the 0.05 level.


C9: In catastrophes, use of a high-technology vehicle that communicates via satellite technology*
Large insurers: 3.33; Small insurers: 1.81.
*Difference statistically significant at the 0.07 level.

C10: Use of portable global positioning systems when difficult to locate damaged property*
Large insurers: 3.00; Small insurers: 1.81.
*Difference statistically significant at the 0.05 level.

Integration
As we have seen thus far, not all aspects of the insurance industry's operational makeup are technology enabled, and the extent to which an insurer's processes are digitized and connected depends on size. We expect that economic gains from utilizing technology in an information-driven business can be substantial, and enhanced further if the technology is integrated across these processes. We asked survey respondents whether and how they integrate customer data across their property and liability lines of business by having them choose the one category that best describes their insurer.

The results in Table 5 show that nearly 65 percent of insurer respondents do inte-
grate their customer data across marketing, underwriting and claims. An additional
23.5 percent of insurers expect to accomplish this task within 3 years. Whether an in-
surer is small or large, integration appears to be both key and workable. Since many
insurers are multiline, we were curious about the ability of insurers to integrate their customer data on the property and liability (P&L) side with the life and health (L&H) side of their businesses. Only 29 percent of our respondents had an L&H presence. Among these multiline insurers, none has a P&L system that "easily interfaces" with its L&H system.


TABLE 5
Integration of Customer Data

Response                                                                      Percentage
We presently find it difficult to integrate customer data across
  functional areas                                                                 5.8%
We presently find it difficult to integrate customer data across
  functional areas but plan to do so within the next 3 years                      23.5%
We presently integrate customer data across marketing and underwriting,
  but not claims                                                                   5.8%
We presently integrate customer data across underwriting, claims, and
  marketing                                                                       64.7%
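To picture what integrating customer data across functional areas means in practice, the sketch below joins marketing, underwriting, and claims records on a shared customer identifier. The record layouts and field names are hypothetical illustrations, not any respondent's actual systems.

```python
# Hypothetical sketch: a single customer view assembled from separate
# marketing, underwriting, and claims stores keyed by customer ID.
marketing = {"CUST-42": {"channel": "agent", "campaign": "auto-renewal"}}
underwriting = {"CUST-42": {"risk_class": "preferred", "annual_mileage": 9000}}
claims = {"CUST-42": {"open_claims": 0, "five_year_claim_count": 1}}

def integrated_view(customer_id):
    """Combine what each functional area knows about one customer."""
    return {
        "customer_id": customer_id,
        "marketing": marketing.get(customer_id, {}),
        "underwriting": underwriting.get(customer_id, {}),
        "claims": claims.get(customer_id, {}),
    }

print(integrated_view("CUST-42"))
```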

The likelihood of moving toward complete integration of data across business processes could be enhanced by the economics of service-oriented architecture, or SOA. As defined by He (2003), SOA "provides for a loose coupling among interacting software agents." By contrast, the practice of insurers evident today is "still focused on buying point solutions at the LOB [line of business] level" (Gorman and Macauley, 2007). The distinction between SOA and alternative IT solutions is metaphorically explained by He:

Take a CD for instance. If you want to play it, you put your CD into a CD player
and the player plays it for you. The CD player offers a CD playing service. Which
is nice because you can replace one CD player with another. You can play the same
CD on a portable player or on your expensive stereo. They both offer the same CD
playing service, but the quality of service is different. The idea of SOA departs sig-
nificantly from that of object oriented programming, which strongly suggests that
you should bind data and its processing together. So, in object oriented program-
ming style, every CD would come with its own player and they are not supposed
to be separated. This sounds odd, but it’s the way we have built many software
systems.
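He's CD-player metaphor can be made concrete with a few lines of code: the "CD playing service" is a contract, and any player honoring the contract can be substituted without changing the caller. The sketch below is purely illustrative; the names are invented and are not part of any insurer's or vendor's actual SOA stack.

```python
# Illustrative sketch of He's metaphor: callers depend on a service contract
# (the "CD playing service"), not on a particular player bound to the data.
from typing import Protocol


class PlaybackService(Protocol):
    def play(self, disc: str) -> str: ...


class PortablePlayer:
    def play(self, disc: str) -> str:
        return f"playing {disc} on a portable player"


class StereoPlayer:
    def play(self, disc: str) -> str:
        return f"playing {disc} on an expensive stereo"


def listen(service: PlaybackService, disc: str) -> str:
    # Only the contract matters here, so implementations can be swapped
    # without touching this code; that substitutability is loose coupling.
    return service.play(disc)


print(listen(PortablePlayer(), "favorite album"))
print(listen(StereoPlayer(), "favorite album"))
```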

One of the compelling features of SOA is the flexibility with which it captures complexities, enabling insurer management to enhance the value of information while lowering processing costs. We asked survey participants how they view SOA contributing to their management strategy. We found the prospect of widespread SOA adoption persuasive, as nearly 73 percent of small insurers and 67 percent of large insurers have SOA as either a current or planned approach to their technology infrastructure. During the survey,
insurers were given the chance to offer their view about the benefits of SOA to their firm.
One respondent noted that SOA would “foster the environment needed to effectively
create, utilize, promote, and support reusable artifacts (i.e., use cases, process maps, data


models, patterns, software components, test cases, etc.), and to provide centralized sup-
port for communications.” Similarly, one insurer noted that “we expect to have business
logic and functionality written once and maintained in single software modules for ease
of maintenance and reuse.” While one insurer summarized SOA advantages in terms
of cost reduction, noting that their company would be able to have “efficiency gains
in staffing” and that now there would be a “single portal for all agency and consumer
transactions,” two other insurers expressed a top line impact, noting that speed to market
would increase when changes have to be made quickly and that SOA permitted “rapid
deployment of new applications." One notable barrier to the widespread adoption of SOA, as noted by Gorman and Macauley (2007), is the tension that exists between the lack of standards or comparability in data and technology from one insurer to the next and the economies necessary for third-party vendors to come up with effective and creative SOA applications.

CONCLUDING REMARKS
This research has reported on how technology is currently shaping business practice in the insurance industry by examining a number of innovations related to traditional insurance operational areas from a set of 17 respondents that, while limited in number, included several of the largest insurers in the United States market. When respondents were presented with a broader, senior-level query to cite those implementations of technology that have had a significant impact on company value over the past 10 years, we learned that electronic communication of business processes, such as agent and consumer portals and the paperless office, has been key.

While the findings also reveal more recent pervasive technology utility within mar-
keting, underwriting, and claims, a significant finding is the extent to which insurers
are embracing integration and use of customer data across their traditional practice ar-
eas, which is facilitated by technology advances coupled with the prospects for SOA.
The internal synergy helps management create an information currency that brings
new aspects of business within their control for evaluation and strategy. How man-
agement effectively utilizes today’s technology to streamline existing processes is one
of the value increasing opportunities that will separate winners from losers in the
future.


APPENDIX
(Appendix content not reproduced.)

REFERENCES
Bagby, W. S., 1957, Automation in Insurance, Journal of Risk and Insurance, 24: 158-167.
Bair, S. L., et al., 2004, Consumer Ramifications of an Optional Federal Charter for Life Insurers. World Wide Web: http://www.isenberg.umass.edu/finopmgt/uploads/textWidget/2494.00004/documents/bair-cons-ramifications.
Berger, A. M., M. A. Weiss, and J. D. Cummins, 1997, The Coexistence of Multiple Distribution Systems for Financial Services: The Case for Property-Liability Insurance, Journal of Business, 70: 515-546.
Campbell, D., 2003, The Cost Structure and Customer Profitability Implications of Electronic Distribution Channels: Evidence From Online Banking, Working Paper, Harvard Business School.
Cummins, J. D., and J. VanDerhei, 1979, A Note on the Relative Efficiency of Property-Liability Insurance Distribution Systems, Bell Journal of Economics, 10: 709-719.
England, C., 2005, Federal Insurance Chartering: The Devil's in the Details. World Wide Web: http://cei.org/pdf/4358.
Fritz, M. B. W., S. Narasimhan, and H.-S. Rhee, 1998, Communication and Coordination in the Virtual Office, Journal of Management Information Systems, 14: 7-28.
Gorman, M., and M. Macauley, 2007, Service-Oriented Architecture: Hope or Hype for the Insurance Market, TowerGroup (May).
Grace, M., and R. Klein, 2000, American Enterprise Institute.
Harrington, S. E., 2002, Alliance of American Insurers.
He, H., 2003, What Is Service-Oriented Architecture? World Wide Web: http://webservices.xml.com/pub/a/ws/2003/09/30/soa.html.
Holdredge, W. D., 2005, Gaining Position With Technology, Emphasis, 2-5.
Myhr, A. E., and J. J. Markham, Insurance Operations, Regulation, and Statutory Accounting (Malvern, PA: American Institute for CPCU/Insurance Institute of America).
Oloruntoba, R., 2006, Mobile Learning Environments: A Conceptual Overview. World Wide Web: https://olt.qut.edu.au/udf/OLT2006/gen/static/papers/Oloruntoba_OLT2006_paper.
Pritchett, S. T., J. T. Schmit, H. I. Doerpinghaus, and J. T. Athearn, 1996, Risk Management and Insurance (Eagan, MN: West Publishing).
Skipper, H. D., and W. J. Kwon, 2007, Risk Management and Insurance: Perspectives in a Global Economy (Malden, MA: Blackwell Publishing).
Stoll, B., and K. Cullen, 2005, Elevate Claim Performance via Technology, Emphasis, 18-21.
Trenerry, C. F., 1926, The Origin and Early History of Insurance (P. S. King & Son, Ltd.).
Trieschmann, J. S., R. E. Hoyt, and D. W. Sommer, 2005, Risk Management and Insurance (Mason, OH: Thomson South-Western Publishing).

