Master of Public Policy Class: Annotated Readings #1 and #2

  


Read the article “Factors Influencing the Use of Performance Data to Improve Municipal Services: Evidence from the North Carolina Benchmarking Project.” https://search.ebscohost.com/login.aspx?direct=true&AuthType=sso&db=eft&AN=29332084&site=ehost-live&custid=092-800

Each question is worth 30 points, for a total of 150 points. Each response should be approximately 125 words. You will not be penalized for a longer answer, and a well-presented, thorough response can receive full credit; the word count is offered only as a guideline. Please use full sentences with proper grammar, spelling, and punctuation, and explain or paraphrase rather than cutting and pasting. Each response does not need to be structured as a full essay; a direct response to the question is sufficient. The primary intent of this assignment is to demonstrate that you have read the assigned material and taken away its key points.

1. At the time of this article’s publication (2008), what was the status of how municipal governments had implemented performance measures and used the data meaningfully, as opposed to merely adopting measures (defined on p. 304 as the “design and collection of measures”), or, as described on p. 305, the difference between rhetoric and reality? Why is there a gap between what apparently should be done, according to the data, and what has actually been done?

2. What are the tangible and intangible benefits mentioned on p. 306 (general examples)? Provide two specific examples of tangible benefits discussed later in the article relative to the North Carolina experience. (You may be able to answer this well in fewer than 150 words.)


3. Give three examples of how performance data were helpful in improving services in the NC municipalities.

4. Distinguish between “rudimentary performance measures” and “higher-order measures.” Give an example of each. Explain why higher-order measures are more meaningful.

5. Why is the measurement of efficiency so complex, and why should municipal leaders focus on measures of performance other than efficiency alone?

Annotated Reading #2

Read the article “Commercial Aviation: Pilots’ and Flight Attendants’ Exposure to Noise aboard Aircraft.” (Module 6: https://www.gao.gov/assets/690/688396)
Each question is worth 30 points, for a total of 150 points. Each response should be approximately 125 words. You will not be penalized for a longer answer, and a well-presented, thorough response can receive full credit; the word count is offered only as a guideline. Please use full sentences with proper grammar, spelling, and punctuation, and explain or paraphrase rather than cutting and pasting. Each response does not need to be structured as a full essay; a direct response to the question is sufficient. The primary intent of this assignment is to demonstrate that you have read the assigned material and taken away its key points.

1. Why does exposure to noise matter? What government agency is primarily tasked with oversight of this problem, and why is the government involved, or why should it be involved, in collecting data on this problem at all?

2. The report’s authors do not collect data on the problem directly. Describe the method they use to conduct their analysis and reach their conclusions.

3. The authors use standards set by the oversight agency to assess how effective airlines are at keeping noise down. What are these standards? Describe or paraphrase them.

4. The authors look at studies examining three broad categories of outcomes. What are these categories? How are they measured?

5. Based on what you can tell from this article, are the authors collecting qualitative or quantitative data, or both? Explain your response.

304 Public Administration Review • March | April 2008

Factors Influencing the Use of Performance Data to Improve Municipal Services: Evidence from the North Carolina Benchmarking Project

David N. Ammons and William C. Rivenbark
University of North Carolina at Chapel Hill

David N. Ammons is Albert Coates Professor of Public Administration and Government at the University of North Carolina at Chapel Hill. He is the author of Municipal Benchmarks: Assessing Local Performance and Establishing Community Standards (Sage, 2001) and Tools for Decision Making: A Practical Guide for Local Government (CQ Press, 2002). His research interests include local government management, performance measurement, and benchmarking. E-mail: ammons@sog.unc.edu

William C. Rivenbark is an associate professor of public administration and government at the University of North Carolina at Chapel Hill. He is the coauthor of Performance Budgeting in State and Local Government (M. E. Sharpe, 2003). His research interests include performance and financial management in local government. E-mail: rivenbark@sog.unc.edu

Many local governments measure and report their performance, but the record of these governments in actually using performance measures to improve services is more modest. The authors of this study examine patterns of performance measurement use among a set of North Carolina cities and conclude that the types of measures on which officials rely, the willingness of officials to embrace comparison, and the degree to which measures are incorporated into key management systems distinguish cities that are more likely to use performance measures for service improvement from those less likely to do so.

Surveys of local government officials suggest that the practice of collecting performance measures, at least at a rudimentary level, is fairly well established among U.S. cities and counties (Berman and Wang 2000; GASB and NAPA 1997; Melkers and Willoughby 2005; O’Toole and Stipak 2002; Poister and Streib 1999).¹ Robert Behn even declares playfully, “Everyone is measuring performance” (2003, 586). In contrast, the practice of actually using these measures to influence decisions or to improve services is less apparent and far less documented (Hatry 2002). Clearly, local governments’ progress in using performance measures to influence program decisions and service delivery has lagged behind their pace in collecting and reporting basic measures. Nevertheless, some local governments are using their performance measures to influence program decisions and improve services. This article identifies several initiatives inspired by performance measurement among 15 North Carolina cities engaged in a decade-long comparative performance measurement project. It examines some of the likely reasons for the greater use of performance measures for service improvement decisions by this set of cities compared to cities in general and also the reasons for varying levels of use of measures among these 15 cities.

Measures for Reporting and More?
For many years, professional associations and others have urged local government officials to measure performance for the sake of greater accountability and service improvement (see, e.g., ASPA 1992; GASB 1989; ICMA 1991; NAPA 1991). How, they have asked, can governments be truly accountable unless they not only document their financial condition but also report on service levels and, ideally, on service effectiveness and the efficiency of service delivery? And how can officials manage departments and improve services without performance measures? Evidently, many officials saw the logic of the proponents’ advice or succumbed to the pressure of the growing bandwagon for performance measurement. Today, many local governments measure performance, although often at only the workload or output level. Typically, they report their measures in their budget or, perhaps, in a special report or on the government’s Web site.

The record of local governments in the actual use of performance measures in managerial or policy decisions — beyond simply reporting the numbers — is much spottier. Noting the difference between the adoption of performance measures (i.e., the design and collection of measures) and implementation (i.e., actual use), Patria de Lancer Julnes and Marc Holzer (2001) conclude that only a subset of the state and local governments that collect measures actually use them to improve decision making.² These authors and others who have attempted by means of broad surveys to gain information on the actual use of measures find only modest evidence of implementation and, even then, they acknowledge the possibility of overstatement when such information is self-reported and specific documentation substantiating respondents’ claims is not required (Poister and Streib 1999, 332). Although a 1997 survey produced claims that performance measures had resulted in changes in program budgets, focus, and decisions of city governments, Theodore H. Poister and Gregory Streib detected a tendency for “favorable ratings of the effectiveness of these systems . . . to outstrip reported impacts. . . . [R]elatively few substantial effects were claimed” (1999, 334).

The Use of Performance Data to Improve Municipal Services 305

Behn counts eight purposes for performance measurement but contends that one of the eight, fostering improvement, is “the core purpose behind the other seven” (2003, 586). Those other seven — to evaluate, control, budget, motivate, promote, celebrate, and learn — are means to the desired end and core purpose: to improve. Yet hard evidence documenting performance measurement’s impact on management decisions and service improvements is rare. Apart from a relatively small set of celebrated cases — for instance, New York City’s CompStat, a data-driven system that proved effective in fighting crime (Silverman 2001; Smith and Bratton 2001); Baltimore’s CitiStat, which expanded the concept to a wide array of municipal services (Behn 2006); and several other isolated cases reported in various venues (see, e.g., Osborne and Gaebler 1992; Osborne and Hutchinson 2004; Osborne and Plastrik 2000; Wang 2002; and the Web sites of the GASB and ICMA) — most claims of performance measurement’s value in influencing decisions and improving services tend to be broad and disappointingly vague (Melkers and Willoughby 2005). Even the presumed linkage to budget decisions, although promised in theory, is often difficult to detect in practice (Joyce 1997; Melkers and Willoughby 2005; O’Toole and Stipak 2002; Wang 2002).

The limited use of measures for much beyond public reporting — some detractors would even say, the nonuse by most adopters — has led some public officials and employees to question the net value of collecting measures in the first place and some scholars to note the gap between rhetoric and reality (Berman 2002; Bouckaert and Peters 2002; Coplin, Merget, and Bourdeaux 2002; Dubnick 2005; Grizzle 2002; Kelly 2002; Poister 2003; Streib and Poister 1999; Weitzman, Silver, and Brazill 2006).³ Even several of the presumed leaders in performance management enjoy reputations that they admit are somewhat inflated. In a recent study of 24 cities with outstanding managing-for-results reputations, one-third withdrew from the follow-up probe phase, with several saying that they were not as far along as their reputation would imply (Burke and Costello 2005).

Many explanations are offered for the use or nonuse of performance measures in local government. Some observers point to the support of top management as a crucial ingredient in performance measurement success (de Lancer Julnes and Holzer 2001; Page and Malinowski 2004). Some suggest that interest in performance measures among elected officials or citizen involvement in the development and even the collection of performance measures can be especially important or helpful (Ho and Coates 2004). Others contend that performance measurement is likely to have an influence on important managerial and policy decisions only when steps are intentionally taken to integrate measures into key management systems or decision processes — for example, departmental objectives, work plans, budget proposals and decisions, and strategic planning (Poister and Streib 1999; Clay and Bass 2002).

Each of these explanations is plausible. Perhaps several others, not listed here, are as well. To pass beyond mere conjecture, however, a possible explanation needs to be tested among multiple governments, ideally in a controlled or semicontrolled setting in which comparisons can be made and claims can be confirmed. For this study, we examine the characteristics and patterns of performance measurement use among 15 cities participating in the North Carolina Benchmarking Project. Through their experience, we explore several factors that appear to distinguish municipalities that present clear evidence of the use of performance measures in the making of important decisions from others that do not.

Implementation or actual use of performance measurement has been defined by de Lancer Julnes and Holzer to include “the actual use . . . for strategic planning, resource allocation, program management, monitoring, evaluation, and reporting to internal management, elected officials, and citizens or the media” (2001, 695). In this study, we employ a narrower definition of use. For our purposes, actual use excludes simply reporting measures or somewhat vaguely considering measures when monitoring operations.⁴ For us, credible claims of actual use of performance measures require evidence of an impact on decisions at some level of the organization.

North Carolina Benchmarking Project
Prompted by the desire among local government officials for better cost and performance data with which to compare municipal services, the North Carolina Benchmarking Project was established in 1995 by a set of seven municipalities and the Institute of Government at the University of North Carolina. This project is similar in principle and motive to many other cooperative projects, but it is distinctive in at least three ways. First, the organizers of this project realized, more than organizers of most similar projects have, that their undertaking would be complex, and they resisted the temptation to compare all service functions. Instead, they started small and have only gradually expanded to compare more than the original seven functions targeted at the outset. Second, the project has focused meticulously on cost accounting issues and the uniform application of cost accounting rules across participating municipalities. As a result, project participants exhibit a greater than typical degree of trust in efficiency measures, typically unit costs, developed through this project (Ammons, Coe, and Lombardo 2001). Third, the project has survived for more than a decade — and only a few undertakings of this sort can make that claim. The project’s continuation is testimony to the project’s value as perceived by the participating governments. By 2005, the North Carolina project had grown to include 15 cities and towns (herein referred to simply as cities) ranging in population from 24,357 to 599,771 residents.⁵ The median population in 2005 was 144,333.

306 Public Administration Review • March | April 2008

In what ways is the project valuable to participating cities? Participants report a variety of benefits, both tangible and intangible. Among the intangibles cited are the importance of being among a group of cities engaged in something regarded as a progressive management initiative, increased awareness of the practices of other governments, and the project’s stimulating effect in encouraging officials to consider service delivery options and to make data-driven decisions. Other reported benefits are more tangible and include improved performance measurement and reporting, handy access to better data for those instances when the governing body or chief executive requests comparisons, the ability to use project data in reports and special studies, and improved service quality and efficiency. Perhaps the greatest test for a comparative performance measurement project, as well as for performance measurement in general, is whether the performance data are being used to influence operations. A few such examples appeared early in the project’s history. Many more have emerged in recent years.

Research Inquiry and Methodology
Judging from the remarks of local government observers who report minimal use elsewhere, the record of performance data use by participants in the North Carolina project seems reasonably good and probably surpasses that of many other local governments. If cities participating in the project make greater use of performance measures, why is this so? And why do some of the participants in the project use the data to influence operations more than other participants do? In an attempt to answer these questions, the authors queried project officials in the 15 cities participating in the North Carolina Benchmarking Project in 2005 regarding their experiences and the uses being made of project data. Unlike a random sample of cities, where claims of performance measurement use might be difficult to confirm, the participation of these cities in a coordinated project made confirmation of data use claims relatively easy. Officials in the 15 cities were queried by survey during the spring of 2005 and subsequently by on-site interviews, followed in some cases by telephone calls and e-mail correspondence for clarification and further details. The survey questionnaire inquired about broad applications of performance measures (e.g., communication with elected officials and citizens, uses in support of long-range planning, use of measures in the budget process), preferences among measures and analytic techniques (e.g., staff reliance on outcome, efficiency, or output measures and the methods used to analyze the measures), and documented examples of performance data being used to alter performance to reduce costs or improve service quality. The responses and supporting material revealed extensive use in some cities, showed less use in others, and suggested possible factors influencing the difference.

The approach taken in this study has advantages over the two more common methods of performance measurement research: the single-city case study and the multicity survey, usually without required documentation or follow-up to confirm respondent claims. The former typically lacks the breadth supplied by a multicity study. The latter, usually in the form of a fixed-response mail survey, often produces information of questionable reliability and relevance to performance measurement practice and has been criticized as “methodologically inappropriate” (Frank and D’Souza 2004, 704). Without the requirement of documentation or the promise of follow-up, many local officials responding to such surveys are tempted to overstate their organization’s adoption and use of management techniques deemed to be progressive, such as performance measurement (Wang 1997). More intensive and thorough review on a case-by-case basis provides greater assurance of an accurate reflection of conditions, as well as an opportunity to verify claims of performance measurement uses beyond reporting. This study’s set of mini-case studies — less intensive individually than a full-scale case study but much more intensive than a simple survey — has the advantages of modest breadth as well as relative depth and detail. This approach provided investigators the opportunity to confirm the assertions of municipal officials.

Using Performance Data for Service Improvement
The first instance of major impact from the use of data occurred early in the North Carolina project’s history, when officials of one of the participating cities examined the efficiency measures for residential refuse collection in other cities and found their own measures to be far out of line. The measures indicated high unit costs and low worker productivity. After first challenging but eventually acknowledging the accuracy of their counterparts’ figures, officials in this city realized that the measures revealed the underutilization of labor and equipment. Because a large section of this community was served by a private hauler whose contract would soon expire, the city was able to discontinue private refuse collection and extend its own operation into that area without adding equipment or labor. The annual savings totaled almost $400,000 (Jones 1997).

The Use of Performance Data to Improve Municipal Services 307

Another city used data from the benchmarking project to avoid a price hike from its residential refuse collection contractor. The contractor had initially insisted on a 10 percent increase in its new contract; however, the city used project data to argue convincingly that the contractor’s efficiency was low and its record of complaints was high compared to the residential refuse performance of other project participants. The contractor backed off its price hike. Still another participating city, using data from the benchmarking project to analyze service delivery options for refuse collection, switched to automated equipment and one-person crews. That city reduced its cost per ton for refuse collection by 30 percent between 1996 and 2004.

One of the participating cities was persuaded by project data to introduce changes in its recycling program that increased its waste diversion rate from 14 percent to 24 percent over a five-year period, thereby extending the life of its landfill. Another, alarmed by recycling inefficiencies relative to its counterparts, turned to privatization and reduced the cost per ton of recyclables collected by 24 percent, yielding a savings of approximately $75,000 per year (Ammons 2000). By 2004, the savings relative to the base year had grown from 24 percent to 58 percent per ton.

Project data prompted other analyses involving different departments and services in various cities. Fire service analysis in one case revealed underutilization of staff resources and led to the expansion of operations into emergency medical services. Relying on data from the project, a police study in one city revealed a level of staffing that was low relative to its counterparts and insufficient to meet the department’s objectives regarding proactive patrols. This prompted the hiring of 33 new officers. Another study led to the establishment of a telephone response unit to deflect some of the burden placed on police officers, as documented by project data. Analyses in emergency communications and fleet maintenance in other cities revealed instances of overstaffing relative to actual service demand and led to staff reductions in these functions. Several cities used project data to help establish performance targets in various operations.

What factors have contributed to the use of performance data to improve operations in these cities, when observers so often bemoan the failure of local governments to use performance measures for anything more than performance reporting? The evidence from the North Carolina project is hardly conclusive, given the small set of cities and our reliance on self-reporting for some of the data. Nevertheless, the extent to which respondents provided facts and figures to substantiate their claims convinces us that several of the participating governments have indeed used performance data to improve service delivery. Coupled with information about performance measurement and performance management practices in these cities, the patterns of data use lead us to suggest three factors that are especially influential: the collection of and reliance on higher-order measures — that is, outcome measures (effectiveness) and especially measures of efficiency — rather than simply output measures (workload); the willingness of officials to embrace comparison with other governments or service producers; and the incorporation of performance measures into key management systems.

Collection of and Reliance on Higher-Order Measures
For more than half a century, local governments have been encouraged to measure and report their performance (Ridley and Simon 1943). Through the years, most of the city and county governments that heeded this advice gravitated toward the collection and tabulation of simple workload measures, now often called output measures, the most rudimentary type of performance measures. These measures recorded only the number of units of service produced — for example, applications processed, meters read, arrests made, or tons of asphalt laid. Workload measures had the advantage of simplicity: They were easy to count and easy to report. If an audience or reader could be impressed by the volume of activity undertaken by a department or program, these measures could serve that purpose. Workload measures answer the easiest question: How many? However, they are ill suited for answering more managerially challenging questions: How efficiently? How effectively? Of what quality?

Local government officials who engaged in performance measurement strictly to satisfy their obligation for accountability could do so, at least in a narrow sense, with the least expense and bother by focusing on workload measures. Raw counts of governmental activity would produce big, impressive numbers and would demonstrate, perhaps, that departments and employees were busy. Because they were easy to count and compile, the collecting of workload measures would impose minimal disruption and expense on operating departments. Nevertheless, some operating officials grumbled about devoting any time and resources to the collection of these measures, for they saw little use being made of them. More than a few questioned whether their investment in performance measurement, restricted entirely to workload measures, produced any operating benefits at all.

308 Public Administration Review • March | April 2008

As noted at the beginning of this article, the value of performance measurement can be divided into two broad categories. First, it supports accountability — specifically, performance reporting — and second, service improvement. Perhaps it is axiomatic that performance measurement systems designed strictly for the former (i.e., performance reporting), especially when a premium is placed on ease of data collection, are unlikely to yield much of the latter. Systems intended solely to assure elected officials, citizens, and the media that the government is busily engaged in a broad array of high-volume activities can be designed to achieve this aim while imposing minimal disruption and expense, if these systems focus only on workload measures. Unfortunately, such a system produces feedback having very little managerial or policy value to operating officials or government executives beyond merely documenting whether demand for a service is up, down, or relatively stable. Knowing that 45 citizens were enrolled in the art class at the civic center, that the library had 32,000 visitors, that the water department repaired 600 meters, or that the police department made 200 arrests probably inspires few managers, supervisors, and employees to consider strategies to improve services. Raw workload counts simply do not inspire much managerial thinking.

In contrast, measures focusing on service quality, effectiveness, or efficiency can cause officials and employees to rethink service delivery strategies. For instance, measures revealing that persons signing up for a class at the civic center rarely re-enroll in another, that the local library’s circulation per capita is among the lowest in the region, that the cost per repair is almost as much as the price of a new meter, or that the local home burglary rate has reached an historic high are measures of performance that are much more likely to prompt the consideration of alternate strategies to achieve better results. Unlike workload measures, these measures of efficiency and effectiveness inspire managers, supervisors, and frontline employees to diagnose the problem, if one exists, and to devise strategies to correct it. In short, they inspire managerial thinking.

Performance measurement systems that rely overwhelmingly on workload measures tend to have been designed only to satisfy a narrow view of accountability and to do so at minimal cost in terms of resources and disruption. These systems either were not designed for service improvement or, if service improvement was their purpose, were poorly designed to achieve that end. To charge them with failing to inspire performance improvement — although true — is perhaps an excessively harsh indictment of the officials who put these systems in place, for it misconstrues the original purpose of many of these most rudimentary attempts at performance measurement and reporting.

Over the past few decades, the cities and counties that are considered leaders in local government performance measurement have supplemented their workload measures with measures of efficiency and effectiveness. These governments have invested in systems designed for accountability and service improvement (Halachmi 2002) and therefore are justified in expecting a higher return on their investment in a more advanced system of performance measurement. Good measures of efficiency and effectiveness are more likely than output measures to inspire managerial thinking about service improvement.

Participants in a coordinated performance measurement project, such as that in North Carolina, confront a variety of challenges to achieving data comparability, but they also enjoy some advantages over individual cities or counties tackling performance measurement alone. Project administrators and fellow participants expose these governments to higher-order measures of efficiency and effectiveness and guide them away from reliance on workload measures. They demonstrate to one another a variety of uses for the performance data they collect. As a group, the city officials engaged in the North Carolina project had little difficulty responding to this study’s inquiry, easily rattling off many examples of performance measurement’s influence on local managerial recommendations and decisions. The availability of higher-order measures helped make performance measurement a more relevant management tool in these cities than it appears to be in local governments in general, where many systems continue to rely overwhelmingly on workload measures.

The importance of higher-order measures is amplified by a careful review of key distinctions among participants in the North Carolina project, including respondents’ comments about where they focus their attention among the measures collected and how they use performance data. The project coordinators in some of the participating cities declared that their organization focuses its attention on one or both of the higher-order measures (efficiency and effectiveness); these officials did not even mention workload measures. By and large, these were the cities that accounted for most of the examples of application of performance measurement for service improvement. Their officials — the ones most often and most intensively engaged in the application of performance measures in key management systems and major management decisions — also appeared to be the ones who most fully grasped the value of good efficiency and effectiveness measures in these systems and decisions and who most fully recognized the limited value of workload measures. In contrast, coordinators who said that their cities rely on all three types of measures, who said that they rely on workload and perhaps another type, or who were unable to say which types are most used tended to represent cities with average or less than average evidence of the actual application of performance measurement for service improvement. Their inability or unwillingness to discount the value of workload measures perhaps betrayed their limited attempts to apply performance measures in major management systems and decisions.

The Use of Performance Data to Improve Municipal Services 309

Efficiency Measures in Particular
Ideally, measures of efficiency report with precision the relationship between production outputs and the resources consumed to produce these outputs (see, e.g., Coplin and Dwyer 2000). Resources may be depicted as dollars or as some other representation of a major resource element — for example, $8 per application processed; 150 applications processed per $1,000; 2.2 applications processed per staff hour; 4,400 applications processed per full-time equivalent (FTE) administrative clerk. Each of these examples relates outputs to the dollars or human energy required to produce them. Variations that also address efficiency include measures of utilization depicting the extent to which equipment, facilities, and personnel are fully utilized, and measures that gauge only roughly the efficiency of production processes (e.g., turnaround time, average daily backlog, percentage completed on schedule).
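To illustrate the arithmetic with hypothetical figures (not drawn from the article): an office that processes 12,000 applications at a total program cost of $96,000 has a unit cost of $96,000 ÷ 12,000 = $8 per application, which can equivalently be expressed as 12,000 ÷ 96 = 125 applications processed per $1,000. Dividing the same output count by staff hours or FTE positions instead of dollars yields the labor-based variants described above.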

The pursuit of greater efficiency has a prominent place in the history of American government and public administration in the 20th century, beginning with the “cult of efficiency” and extending to the current insistence on accountability for the productive use of resources (Comptroller General 1988; GASB 1994; Haber 1964; Hatry et al. 1990; Mosher 1968; Schachter 1989; Schiesl 1977). Typically, candidates for elective office promise to eliminate waste and administrators at all levels of government swear allegiance to the principles of efficiency. In reality, however, relatively few local governments are particularly adept at measuring their efficiency with much precision, and those that are have been accorded something akin to celebrity status by counterparts and admirers — for example, the cities of Sunnyvale, Indianapolis, Charlotte, and Phoenix. Aggressive assaults on inefficiency have less often been prompted by precise measurement and the desire to squeeze another dime out of unit cost or another half hour out of processing time than by the more obvious alarms of idle employees in full public view or budgets rising far beyond historic levels while vendors claim that they can do the work more cheaply.

Privatization and managed competition, the celebrated managing-for-results tactic in which municipal departments must compete with private companies or other producers for the opportunity to deliver services, have exposed vulnerabilities in many local government operations that perhaps have arisen, in part, because of the inadequate state of efficiency measurement in these governments. Unmeasured, untracked, and therefore often undetected, small inefficiencies can become large over a span of years, and eventually, cost-saving alternatives for these operations become understandably attractive.

Managed competition allows decision makers to skip past many of the intricacies and complexities of measuring efficiency at the various stages of the production process — stages and measures that should not be skipped if one is truly managing performance. All that officials need in order to make their managed competition decision are a few quality-of-service standards or measures and the bottom-line costs for the various options. Managed competition is hardly a success story for efficiency measurement; instead, it signals a surrender to the reality that many local governments have not measured or managed their efficiency very well and now find themselves vulnerable if officials are ready to test the bottom line for selected operations.

Efficiency measurement is not easy, even if the concept seems simple. A measure that relates outputs to resources with precision requires the accurate measurement of outputs and inputs. The problem for most governments lies primarily in accounting for inputs. The cost accounting systems in many local governments, if they exist at all, fail to capture total costs. Perhaps they overlook overhead or other indirect costs, ignore the cost of employee benefits (at least insofar as a particular program’s costs are concerned), or fail to include annualized capital expenses. In such instances, if unit costs are calculated at all, they understate actual costs and mask inefficiency.
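A hypothetical illustration (figures not from the article) shows how easily this happens: a refuse operation that reports $1.8 million in direct costs for collecting 30,000 tons would calculate a unit cost of $60 per ton; if omitted employee benefits ($300,000), annualized equipment costs ($240,000), and departmental overhead ($160,000) are added, total costs rise to $2.5 million and the true unit cost is roughly $83 per ton, meaning the reported figure understates actual cost by more than a quarter.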

310 Public Administration Review • March | April 2008

Some local governments desiring measures of efficiency cope with inadequate cost accounting systems by using staff hours, labor hours, or FTE positions to reflect resources rather than dollars. This strategy dodges many cost accounting issues within their own system and has the additional advantage of permitting efficiency comparisons with other governments without worrying about differences in cost accounting rules from one government to another. Even this measure of efficiency becomes complex, however, when the time of a given employee must be divided among multiple duties and different outputs. While time-logging systems introduce complications that many operations resist, estimation techniques introduce imprecisions that can reduce the value of the measure as a diagnostic tool and as a reliable guide for performance management efforts.

In the face of these complexities, too many local governments resort to reporting “FTEs per 1,000 population” or “cost per capita” for services overall or for the services of a particular department. These are extremely crude measures of efficiency, if they can be called efficiency measures at all. Comparisons of FTEs per 1,000 population are favorites of local governments that contract out one or more major functions; with fewer of their own employees, they look good in such comparisons, regardless of whether the privatization strategy improves services or saves money. Costs per capita for services overall are typically calculated by dividing the total budget by the current population and compared with similar figures for neighboring jurisdictions or counterparts more broadly. These comparisons usually ignore differences in the quality and array of services provided by the listed governments. A city government that has no responsibility for parks or fire services because these are handled by a county government or special district will appear more efficient in a total cost per capita comparison than its full-service counterparts that have responsibility for these costly functions. Per capita cost comparisons on a function-by-function basis reduce this problem but often are plagued by cost accounting variations from city to city.
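A hypothetical comparison (figures not from the article) illustrates the distortion: City X, with a $150 million budget and 100,000 residents, spends $1,500 per capita, while City Y, also with 100,000 residents, budgets $120 million, or $1,200 per capita, and appears 20 percent more efficient; but if fire protection and parks in City Y are provided by the county at a cost of, say, $40 million, a like-for-like comparison would put City Y at roughly $1,600 per capita.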

For governments wishing to possess good efficiency measures and desiring to compare their efficiency to others, there are advantages in affiliating with a cooperative project that doggedly focuses on issues of cost accounting. Most unaffiliated cities have to grapple on their own with the problems noted in preceding paragraphs. Some have overcome these problems and have established good efficiency measures that they can use not only to track changes in their own efficiency from year to year but also to compare with others, although with caution. Most, however, do not overcome the problems noted here. Because of the inadequacies of efficiency measurement and lack of uniformity in cost accounting rules, most cities and counties wishing to compare their services with other jurisdictions are well advised to focus primarily on measures of effectiveness and quality, where cost accounting and the differentiation of multiple duties are not at issue, and only secondarily on measures of efficiency.

Participants in the North Carolina project tell a different story. In this project, which focuses a large portion of its attention on cost accounting rules and uniformity in reporting, participants claim to rely on efficiency measures as heavily or in some cases more heavily than on other categories of measures. The project has produced efficiency measures that participants consider reliable. Accordingly, project participants are less apt to ignore the messages they receive from these measures. Because they have expended so much effort on identifying costs precisely, when their efficiency measures suggest they are inefficient, they are unlikely to dismiss the warning. Instead, they are likely to focus on finding ways to correct the problem. The broad array of performance management initiatives reported in table 1 reflects this tendency. Participating cities are arrayed from left to right roughly according to the level and significance of their use of project data to influence operations.

Among participants, some cities emphasize reliance on efficiency measures more than others do. Claims of reliance on efficiency measures appear to be necessary but insufficient as predictors of extensive use of performance measurement data. Some that claimed to use efficiency measures as much or more than workload or effectiveness measures were not among the project leaders in performance management applications; however, others making this claim were among the leaders. Participants that did not indicate use of efficiency measures tended not to be among the performance management leaders.

Comparison with Others
Local governments in general exhibit different levels of enthusiasm for comparing their own performance statistics with the statistics of others. Some eschew interjurisdictional comparisons or engage in them only reluctantly; some are more receptive, but only if the comparisons are carefully controlled; and others appear to embrace comparisons wholeheartedly. For instance, the city of Portland, Oregon, and others voluntarily publishing Service Efforts and Accomplishments Reports at the urging of the Governmental Accounting Standards Board have featured performance comparisons prominently (Portland City Auditor 2003). The growth of the performance comparison project of the International City Management Association, which included 87 cities and counties in 2004 and more than 200 by 2007, is further evidence of an enthusiasm for comparison.⁶ Representatives of each of the three groups — reluctant comparers, willing but cautious comparers, and enthusiastic comparers — are present among the 15 municipalities participating in the North Carolina project.

The Use of Performance Data to Improve Municipal Services 311

Some local governments engage in performance measurement but insist that it is not for the purpose of interjurisdictional comparison. Officials of these governments, including one or two in the North Carolina project, contend that they are more interested in reviewing their own year-to-year performance than in comparing performance with others. While comparison with one’s own performance at earlier periods of time is important, reluctance to embrace external comparison is odd for a participant in a project designed primarily for that purpose and may reveal an underlying distrust of performance measurement, anxiety about the numbers being produced and what they will suggest about relative standing, or a lack of confidence in the organization’s ability to improve performance.

Representatives of one city (city L) have been outspoken from the start about their greater interest in year-to-year comparisons of their own performance data than in external comparisons, even if their official response to this study’s inquiry indicated a willingness to compare with cities of similar size. This city’s reticence about comparison with other local governments appears to extend also to its use of data for performance management. The concerns that inhibit the former apparently also inhibit the latter. These are the reluctant comparers.

A second group — the willing but cautious comparers — includes cities that are more open to external comparisons but strongly prefer comparisons only to cities of similar size, perhaps only including a select set of cities generally considered to be of a like nature by community leaders or citizens in general. Some officials are especially restrictive in their notions regarding suitable comparisons, preferring that the cities not only be similar in size but also similar in other demographic characteristics and in mode of service delivery for whatever function is being compared.

Despite their preference for limiting comparisons by size, general similarity, or even more restrictive grounds, officials in this second category clearly are more open to external comparison than those in the first category who would prefer to reject it altogether. Nevertheless, even these officials reveal a degree of caution that perhaps suggests a tendency to overestimate the importance of economies of scale (hence their reluctance to be compared to larger cities), a sense of anxiety over the use of performance comparison as a management report card rather than a search for best practices, or a more modest case of the distrust of performance measurement and lack of confidence in organizational response attributed to officials who prefer not to engage in external comparisons at all.

Concern that service demands and the services themselves are different in fundamental ways in large and small cities, and that economies of scale will favor larger communities, fuels the reluctance of many officials to engage in comparison across population ranges. While the challenges of service delivery and expectations of service recipients differ from community to community, the effects of scale economies are less clear than many who reject comparison across population ranges assume. Studies of economies of scale for local government services report different economy-of-scale rates and ceilings across various municipal functions and are sometimes contradictory in their findings (Ahlbrandt 1973; Boyne 1995; DeBoer 1992; Deller, Chicoine, and Walzer 1988; Duncombe and Yinger 1993; Fox 1980; Gyimah-Brempong 1987; Hirsch 1964, 1965, 1968; Kitchen 1976; Newton 1982; Ostrom, Bish, and Ostrom 1988; Savas 1977a, 1977b; Travers, Jones, and Burnham 1993; Walzer 1972). Nevertheless, most participants in the North Carolina project prefer comparisons with cities of similar size despite ambiguous evidence of population-related effects on service quality or unit costs within the project data. Their preference for comparison only with cities of similar size, even when efficiency measures are standardized as unit costs, reveals a latent belief in economies of scale stronger than the evidence of the existence and impact of such economies supports.

The desire to carefully control the comparison, not only by population but also by other factors to ensure similarity among the comparison group, suggests a sense of anxiety among officials over the possibility that the comparison will be used as a management report card — that is, as a gauge for assessing how well or how poorly department heads and other managers are doing their jobs. Unfortunately, this anxiety can completely displace the search for best practices and produce a benchmarking design that limits the likelihood of breakthrough discoveries. Two characteristics of this group of officials hint at their concern that performance comparisons will be used as a management report card. First is their insistence on removing the population or economy-of-scale factor from the equation, even if scale economies are weak or nonexistent for a given function, rather than simply controlling for these effects. This suggests a preoccupation with having a “level playing field.” When pressed, few local government officials will contend that all the best ideas for service delivery reside only in cities of their size, and none would concede that larger municipalities are always more efficient than medium-sized or smaller ones. By insisting that their city be compared only with similarly sized municipalities, they willingly sacrifice the possibility of learning a valuable lesson from a larger city or a smaller city to the belief that comparison of like cities will be a fairer comparison. Second, the preference of some that comparison units have the same mode of operation for the function being examined similarly emphasizes the importance of a level playing field. If, in fact, the comparison will be used simply to judge the performance of managers, then establishing a fair basis of comparison is indeed important. However, if the purpose is to find new ideas for improving operations, then omitting all but those operating in a similar fashion defeats this purpose.

312 Public Administration Review • March | April 2008

Table 1  Reported Uses of Performance Data by Cities Participating in the North Carolina Benchmarking Project

Columns: Cities A through E (continued below for Cities F through O). For each city, the table records claimed uses of project data beyond reporting (establishing performance targets; contracting decision/management; program evaluation; budget proposal/review; other [1]); the types of measures used (workload/output, efficiency, effectiveness/outcome); the preferred basis of comparison (cities of similar size; unrestricted: average, best, worst, all; or, for one city, the project participant average and selected others); and reported applications of project data. (Individual checkmark entries could not be recovered in this copy; the reported applications column is reproduced below.)

Reported applications of project data:

City A: Used to negotiate price and establish performance standards for refuse contract; to project service costs for annexation; to review staff/equipment requests; as a gauge for redesign of service routes and monitoring performance; to monitor community policing performance and deployment results; influenced emergency communications work plans, leading to improved performance; influenced staffing decisions and development of a work order system in asphalt maintenance; incorporated into fire department goals and objectives, performance appraisals, analysis of station locations, and analysis for fire inspections; used for analysis of fleet maintenance, identifying an opportunity for staff reduction and the need for a revised vehicle replacement schedule; prompted review of HRM processes, goals, staffing, and employee benefits.

City B: Data supported move to automated refuse trucks; used to monitor refuse collection efficiency and effectiveness and the waste diversion rate; to evaluate requests for additional police personnel; a low ratio of calls per telecommunicator prompted analysis of emergency communications; to evaluate appropriate use of contractors in asphalt maintenance; to evaluate the fleet maintenance operation and vehicle replacement policy and set performance targets for mechanics; to consider comparative employee turnover rates in compensation deliberations.

City C: Project data used to identify opportunities for more efficient deployment of refuse collection equipment and crews, yielding substantial budgetary savings; to assess the costs and benefits of backyard vs. curbside collection (leading to the introduction of a voluntary curbside program).

City D: Project data used to compare costs and workload in fire services, especially fire inspections.

City E: Data confirmed benefits of automated refuse collection; data used to evaluate contract costs; to analyze effects of service delivery options on waste diversion rates; to evaluate use of seasonal vs. permanent staffing; to evaluate performance and set work levels in police/emergency communications; to analyze equipment options for asphalt maintenance, improving efficiency; data analysis led to the fire department taking a role in EMS; influenced fleet maintenance performance targets.

(1) “Other” uses noted by respondents included use of the project’s measures in annexation studies and as a reference source for responding to manager and council requests.

Project participants in this second category — open mostly to comparison with “like” cities — occupied the broad middle range of participating municipalities. They were large in number and varied in their performance management activities, including some cities that were among the project’s leaders and others that engaged in only a few data-driven management initiatives.

The Use of Performance Data to Improve Municipal Services 313

Table 1 (continued)  Cities F through O

Comparison preferences reported for this group include comparison with cities of similar size, unrestricted comparison, comparison ultimately restricted to like cities (two cities), and departments that prefer not to compare with others (one city); individual checkmark entries could not be recovered in this copy.

Reported applications of project data:

City F: Project data led to consideration of curbside collection and review of equipment type and crew configuration for recycling services; used in evaluating police deployment strategies; in assessing supervisory staff size in emergency communications (staff increased); in workforce planning for fire inspections; in monitoring fleet maintenance performance.

City G: Project data used to assess police staffing and deployment; to adjust the fire department work plan; to focus improvement efforts and work planning in building inspections.

City H: Project data prompted a shift from rear loaders for refuse collection to side loaders and smaller crews; used to assess needs for support staff in police services.

City I: None.

City J: Project data used to assess pros and cons of automated refuse collection; to push the recycling vendor for improved performance reporting; to review asphalt maintenance strategies.

City K: Data used to assess staffing and equipment needs in residential refuse and leaf/litter collection, including automation options; staffing needs in police/fire services and building inspection; identified and remedied inadequacies in performance information for emergency communications and asphalt maintenance; analysis of fleet services.

City L: Project data used to assess conversion from backyard to curbside collection of residential refuse; provided impetus to analyze emergency communications.

City M: Project data used to assess funding level for asphalt maintenance; comparative statistics for fire service prompted scrutiny.

City N: Project data used in refuse collection contract negotiation; to review asphalt maintenance costs; to analyze HRM centralization.

City O: None.

A third category, occupied consistently by only one project participant (city A) and intermittently by another (city C), includes local governments that embrace comparisons even when the initial results of these comparisons reveal their own performance to be disappointing. These are the enthusiastic comparers. For them, the comparisons are the first step in a series of steps that lead to performance improvement. The first step provides the impetus for the second and the third. These cities are more likely than others to use performance measures to improve operations. Their list of management initiatives tended to be longer or more significant in terms of documented service improvement or magnitude of budgetary impact.

Incorporating Performance Measurement into Key Management Systems

Performance measurement has long been promoted as a method of achieving greater accountability in local government. This point seems indisputable, but some officials have defined accountability more narrowly than have others. For officials subscribing to the narrowest definition, accountability means performance reporting, plain and simple. They believe that an accountable city or county government will keep the governing body, media, and citizens informed about the government's financial condition and the performance of its major functions. These governments often report performance measures in their budget documents. Some produce separate performance reports or post performance measures on their Web site. Those perceiving accountability most narrowly may be inclined to view performance measurement as a necessary chore that must be done to fulfill their accountability obligation. In this view, expenditures of dollars, time, and energy to collect and report performance measures are a cost of doing business rather than an investment in service improvement, and as such, this cost should be kept at a minimal level, if possible. Accordingly, many of these cities and counties load up their performance reports with raw counts of workload (outputs). After all, these are the simplest and cheapest measures to collect and tabulate, and perhaps the elected officials and citizens will be impressed by the number of transactions being processed or the tons of garbage being collected. The higher-order measures of efficiency and effectiveness are more difficult to compile and often are not attempted by officials taking a minimalist view of accountability.
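To make the distinction concrete, a higher-order efficiency measure can be derived from the same raw workload and expenditure figures a city already collects. The following minimal sketch is illustrative only; the city names and figures are hypothetical and are not data from the North Carolina project.

# Minimal sketch: turning raw workload counts (outputs) into a higher-order
# efficiency measure (cost per unit of output). All figures are hypothetical.

refuse_collection = {
    # city: (tons of residential refuse collected, total program cost in dollars)
    "City X": (42_000, 3_150_000),
    "City Y": (18_500, 1_665_000),
    "City Z": (60_200, 3_913_000),
}

for city, (tons, cost) in refuse_collection.items():
    cost_per_ton = cost / tons          # efficiency measure
    print(f"{city}: {tons:,} tons collected; ${cost_per_ton:.2f} per ton")

# The raw output (tons collected) says little by itself; cost per ton lets a
# manager compare performance across cities or over time and ask why unit
# costs differ (for example, crew size, equipment, or collection method).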

A broader view of accountability includes the obligation for basic performance reporting but extends beyond the raw workload counts into dimensions of service efficiency, quality, and effectiveness. Accountable officials, in this view, are responsible stewards of the government's resources who understand both their obligation to provide services that balance the community's desires for quality and efficiency and their obligation to produce evidence of their performance on this score. In order to conscientiously manage their operations, officials taking this broader view must be able to assure themselves and others that they are achieving reasonable levels of efficiency and service quality. For this, they must have reliable measures of efficiency and effectiveness (outcomes) that will either alert them to problems and prompt the development of new management strategies or reassure them that they are meeting their performance targets.

Officials taking the narrow view of accountability are less likely to venture beyond workload measures and are unlikely to try to incorporate performance measures into key management systems. For them, it seems rational and prudent to collect only the simplest measures and to divert as few resources as possible from service delivery to the measurement of performance. Given their narrow view of accountability and the minimal value of raw workload counts for management or policy decisions, they are unlikely to use performance measures meaningfully in strategic planning or management systems, performance contracts, departmental or individual work plans, performance targets, performance audits, program evaluations, service improvement strategies, cost-benefit analyses, annexation and other special studies, or budget proposals. These uses are much more likely to be found in local governments where officials take the broader view of accountability and where performance measurement is considered an indispensable ingredient in performance management. In such governments, performance measurement is a tool that provides reassurance to the manager or supervisor when performance is on target and sounds an alarm when performance falls short of expectations, signaling the need for focused attention and perhaps a new strategy and helping the organization fulfill its obligation for conscientious management that delivers quality services in an efficient manner.

Participating municipalities in the North Carolina project were questioned about their use of project data for four management purposes beyond reporting: establishing performance targets; contracting and managed competition, including analysis of options as well as contract design and management; program evaluation; and budget proposals and review. Two of the cities (cities A and B) reported all four uses. For instance, city A, the city mentioned previously for having used benchmarking project data to avoid a price hike from its refuse collection contractor, clearly benefited from having incorporated these data into its performance management, contract monitoring, and budgeting systems. We would assert that cities A and B are among the three or four in this set that have most fully adopted the broader definition of accountability. Not coincidentally, these two cities also provide some of the most extensive examples of the application of performance data to improve operations.

Two other cities (cities D and G), recognized in other venues for the sophistication of their management systems and their use of performance data in general, have incorporated less of the data from this project into their systems and report fewer applications of the data from this project than do a few of their counterparts. Nevertheless, their level of use places them on the left-hand portion of table 1. Two other cities that report three of the four uses beyond reporting (cities C and E) are also among the project leaders in the tangible application of project data for operations improvement.

The incorporation of performance data into management systems is not a perfect predictor of the actual use of performance measurement to adjust operational processes or to improve the quality or efficiency of services. Nor is the failure to incorporate performance data into key management systems an absolute guarantee that the organization will not use its measures for service improvement. Some of the North Carolina cities that have been slow to incorporate project data into their management systems nevertheless have been able to report beneficial applications of project data. However, even among the small set of cities engaged in the North Carolina project, a positive relationship between the incorporation of performance data in key management systems and the application of these data for service improvement is evident.

Conclusions

As the collecting of performance measures by city and county governments has become more common, observers have increasingly noted with disappointment the meager use of these measures to improve the quality or efficiency of services. As researchers seek explanations for the use or nonuse of performance data in local government, they might look to the experience of the cities participating in the North Carolina Benchmarking Project for three possibilities. The experience of 15 participating municipalities suggests that the likelihood that performance data will be used to influence operations is enhanced by the collection of and reliance on higher-order measures, especially efficiency measures, rather than simply workload or output measures; the willingness of officials to embrace comparison with other governments or service producers; and the incorporation of performance measures into key management systems.

Acknowledgments

The authors gratefully acknowledge the assistance of Dale J. Roenigk, director of the North Carolina Benchmarking Project, in helping compile the information for this article.

Notes

1. Although reviews of performance reporting documents have revealed the tendency of some local officials to overstate their government's measurement status in surveys (Ammons 1995; Hatry 1978; Usher and Cornia 1981), it is nevertheless safe to conclude that the practice of collecting basic measures, especially workload or output indicators, is widespread.

2. Common failure to use performance measurement for purposes beyond reporting is not confined to state and local governments. A National Academy of Public Administration panel examining early federal efforts to implement the Government Performance and Results Act found "little evidence in most plans that the performance information would be used to improve program performance" (NAPA 1994, 8).

3. Grizzle notes the common complaint "that decision makers seldom use performance information to make decisions" (2002, 363). Berman writes that many managers see performance measurement as "a required management chore with few potential advantages" (2002, 349). Streib and Poister detect only "a narrow range of benefits," "few substantial impacts," and no significant effects on "bottom line issues" (1999, 119). Bouckaert and Peters observe that the costs of performance measurement are readily apparent, while the "expected (or hoped for) benefits from performance-based management" are "sometimes invisible" (2002, 360). Poister notes that interest in performance measurement waned in the 1980s "because measures were increasingly perceived as not making meaningful contributions to decision making" and that lingering "skepticism remains about both the feasibility and the utility of measurement systems" (2003, 6, 272). Coplin, Merget, and Bourdeaux contend that most government agencies do not have systems in place that make performance data "part of the decision-making processes and have not made a serious commitment to do so, whether they profess to or not" (2002, 700). Weitzman, Silver, and Brazill write that, despite the assumption that good performance data will lead to improved decision making, "the evidence to support the leap from better information to better policy is not yet substantiated" (2006, 397). Dubnick found "nothing in the existing literature . . . that would provide a logical (let alone a theoretical or empirical) link between account giving and performance" (2005, 403). Kelly reports, "We know a lot about how to construct and report performance measures, but we cannot say specifically why we go to all the trouble. According to our best evidence, nothing much changes as a result of adopting performance measurement systems" (2002, 375).

4. Though we readily acknowledge that reporting measures is an important use of performance measurement for the purpose of accountability, the focus of this study is the use of performance measures to influence decisions and improve services. Simply reporting measures does not necessarily reflect reliance on these measures for decisions. Similarly, although we agree that performance measures should be instrumental in the monitoring of operations, vague assertions of that use are easily overstated and therefore are dismissed in this study.

5. The 15 cities are Asheville, Cary, Charlotte, Concord, Durham, Gastonia, Greensboro, Hickory, High Point, Matthews, Raleigh, Salisbury, Wilmington, Wilson, and Winston-Salem, North Carolina. As a project participant, each city agrees to commit administrative resources necessary to compile its data in a timely manner and to pay an annual fee to offset the costs borne by the university in managing the project. Any North Carolina municipality may join the project. For more information on this project, see http://www.sog.unc.edu/programs/perfmeas.

6. See information about the International City Management Association's Center for Performance Measurement at http://www.icma.org.

References

Ahlbrandt, Roger S., Jr. 1973. Efficiency in the Provision of Fire Services. Public Choice 16(1): 1–15.
American Society for Public Administration (ASPA). 1992. Resolution Encouraging the Use of Performance Measurement and Reporting by Government Organizations. Washington, DC: ASPA.
Ammons, David N. 1995. Overcoming the Inadequacies of Performance Measurement in Local Government: The Case of Libraries and Leisure Services. Public Administration Review 55(1): 37–47.
———. 2000. Benchmarking as a Performance Management Tool: Experiences among Municipalities in North Carolina. Journal of Public Budgeting, Accounting and Financial Management 12(1): 106–24.
Ammons, David N., Charles Coe, and Michael Lombardo. 2001. Performance-Comparison Projects in Local Government: Participants' Perspectives. Public Administration Review 61(1): 100–110.
Behn, Robert D. 2003. Why Measure Performance? Different Purposes Require Different Measures. Public Administration Review 63(5): 586–606.
———. 2006. The Varieties of CitiStat. Public Administration Review 66(3): 332–40.
Berman, Evan M. 2002. How Useful Is Performance Measurement? Public Performance and Management Review 25(4): 348–51.
Berman, Evan M., and XiaoHu Wang. 2000. Performance Measurement in U.S. Counties: Capacity for Reform. Public Administration Review 60(5): 409–20.
Bouckaert, Geert, and B. Guy Peters. 2002. Performance Measurement and Management: The Achilles' Heel in Administrative Modernization. Public Performance and Management Review 25(4): 359–62.
Boyne, G. A. 1995. Population Size and Economies of Scale in Local Government. Policy and Politics 23(3): 213–22.
Burke, Brendan F., and Bernadette C. Costello. 2005. The Human Side of Managing for Results. American Review of Public Administration 35(3): 270–86.
Clay, Joy A., and Victoria Bass. 2002. Aligning Performance Measurement with Key Management Processes. Government Finance Review 18: 26–29.
Comptroller General of the United States. 1988. Governmental Auditing Standards: Standards for Audit of Governmental Organizations, Programs, Activities, and Functions. Rev. ed. Washington, DC: U.S. Government Printing Office.
Coplin, William D., and Carol Dwyer. 2000. Does Your Government Measure Up? Basic Tools for Local Officials and Citizens. Syracuse, NY: Syracuse University, Maxwell Community Benchmarks Program.
Coplin, William D., Astrid E. Merget, and Carolyn Bourdeaux. 2002. The Professional Researcher as Change Agent in the Government-Performance Movement. Public Administration Review 62(6): 699–711.
DeBoer, Larry. 1992. Economies of Scale and Input Substitution in Public Libraries. Journal of Urban Economics 32(2): 257–68.
de Lancer Julnes, Patria, and Marc Holzer. 2001. Promoting the Utilization of Performance Measures in Public Organizations: An Empirical Study of Factors Affecting Adoption and Implementation. Public Administration Review 61(6): 693–708.
Deller, Steven C., David L. Chicoine, and Norman Walzer. 1988. Economies of Size and Scope in Rural Low-Volume Roads. Review of Economics and Statistics 70(3): 459–65.
Dubnick, Melvin J. 2005. Accountability and the Promise of Performance. Public Performance and Management Review 28(3): 376–417.
Duncombe, William, and John Yinger. 1993. An Analysis of Returns to Scale in Public Production, with an Application to Fire Protection. Journal of Public Economics 52(1): 49–72.
Frank, Howard A., and Jayesh D'Souza. 2004. Twelve Years into the Performance Measurement Revolution: Where We Need to Go in Implementation Research. International Journal of Public Administration 27(8–9): 701–18.
Fox, William F. 1980. Size Economies in Local Government Services: A Review. Rural Development Research Report No. 22. Washington, DC: U.S. Department of Agriculture.
Governmental Accounting Standards Board (GASB). 1989. Resolution on Service Efforts and Accomplishments Reporting. Norwalk, CT: GASB.
———. 1994. Concepts Statement No. 2 of the Governmental Accounting Standards Board on Concepts Related to Service Efforts and Accomplishments Reporting. Norwalk, CT: GASB.
Governmental Accounting Standards Board (GASB), and National Academy of Public Administration (NAPA). 1997. Report on Survey of State and Local Government Use and Reporting of Performance Measures. Washington, DC: GASB.
Grizzle, Gloria A. 2002. Performance Measurement and Dysfunction: The Dark Side of Quantifying Work. Public Performance and Management Review 25(4): 363–69.
Gyimah-Brempong, Kwabana. 1987. Economies of Scale in Municipal Police Departments: The Case of Florida. Review of Economics and Statistics 69(2): 352–56.
Haber, Samuel. 1964. Efficiency and Uplift: Scientific Management in the Progressive Era 1890–1920. Chicago: University of Chicago Press.
Halachmi, Arie. 2002. Performance Measurement, Accountability, and Improved Performance. Public Performance and Management Review 25(4): 370–74.
Hatry, Harry P. 1978. The Status of Productivity Measurement in the Public Sector. Public Administration Review 38(1): 28–33.
———. 2002. Performance Measurement: Fashions and Fallacies. Public Performance and Management Review 25(4): 352–58.
Hatry, Harry P., James R. Fountain, Jr., Jonathan M. Sullivan, and Lorraine Kremer. 1990. Service Efforts and Accomplishments Reporting: Its Time Has Come. Norwalk, CT: Governmental Accounting Standards Board.
Hirsch, Werner Z. 1964. Local vs. Areawide Urban Government Services. National Tax Journal 17(4): 331–39.
———. 1965. Cost Functions of an Urban Government Service: Refuse Collection. Review of Economics and Statistics 47(1): 87–93.
———. 1968. The Supply of Urban Public Services. Baltimore: Johns Hopkins University Press.
Ho, Alfred, and Paul Coates. 2004. Citizen-Initiated Performance Assessment: The Initial Iowa Experience. Public Performance and Management Review 27(3): 29–50.
International City Management Association (ICMA). 1991. Practices for Effective Local Government Management. Washington, DC: ICMA.
Jones, Ann. 1997. Winston-Salem's Participation in the North Carolina Performance Measurement Project. Government Finance Review 13(4): 35–36.
Joyce, Philip G. 1997. Using Performance Measures for Budgeting: A New Beat, or Is It the Same Old Tune? In Using Performance Measurement to Improve Public and Nonprofit Programs, edited by Kathryn E. Newcomer, 45–61. San Francisco: Jossey-Bass.
Kelly, Janet M. 2002. Why We Should Take Performance Measurement on Faith. Public Performance and Management Review 25(4): 375–80.
Kitchen, Harry. 1976. A Statistical Estimation of an Operating Cost Function for Municipal Refuse Collection. Public Finance Quarterly 4(1): 56–76.
Melkers, Julia, and Katherine Willoughby. 2005. Models of Performance-Measurement Use in Local Governments: Understanding Budgeting, Communication, and Lasting Effects. Public Administration Review 65(2): 180–90.
Mosher, Frederick C. 1968. Democracy and the Public Service. New York: Oxford University Press.
National Academy of Public Administration (NAPA). 1991. Performance Monitoring and Reporting by Public Organizations. Washington, DC: NAPA.
———. 1994. Toward Useful Performance Measurement: Lessons Learned from Initial Pilot Performance Plans Prepared under the Government Performance and Results Act. Washington, DC: NAPA.
Newton, K. 1982. Is Small Really So Beautiful? Is Big Really So Ugly? Size, Effectiveness, and Democracy in Local Government. Political Studies 30(2): 190–206.
Osborne, David, and Ted Gaebler. 1992. Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector. Reading, MA: Addison-Wesley.
Osborne, David, and Peter Hutchinson. 2004. The Price of Government: Getting the Results We Need in an Age of Permanent Fiscal Crisis. New York: Basic Books.
Osborne, David, and Peter Plastrik. 2000. The Reinventor's Fieldbook: Tools for Transforming Your Government. San Francisco: Jossey-Bass.
Ostrom, Vincent, Robert Bish, and Elinor Ostrom. 1988. Local Government in the United States. San Francisco, CA: Institute for Contemporary Studies.
O'Toole, Daniel E., and Brian Stipak. 2002. Productivity Trends in Local Government Budgeting. Public Performance and Management Review 26(2): 190–203.
Page, Sasha, and Chris Malinowski. 2004. Top 10 Performance Measurement Dos and Don'ts. Government Finance Review 20(5): 28–32.
Poister, Theodore H. 2003. Measuring Performance in Public and Nonprofit Organizations. San Francisco: Jossey-Bass.
Poister, Theodore H., and Gregory Streib. 1999. Performance Measurement in Municipal Government: Assessing the State of the Practice. Public Administration Review 59(4): 325–35.
Portland, Oregon, Office of the City Auditor. 2003. City of Portland Service Efforts and Accomplishments: 2002–03. http://www.portlandonline.com
Ridley, Clarence E., and Herbert A. Simon. 1943. Measuring Municipal Activities: A Survey of Suggested Criteria for Appraising Administration. Chicago: International City Managers' Association.
Savas, E. S. 1977a. An Empirical Study of Competition in Municipal Service Delivery. Public Administration Review 37(6): 717–24.
———. 1977b. The Organization and Efficiency of Solid Waste Collection. Lexington, MA: Lexington Books.
Schachter, Hindy Lauer. 1989. Frederick Taylor and the Public Administration Community: A Reevaluation. Albany: State University of New York Press.
Schiesl, Martin J. 1977. The Politics of Efficiency: Municipal Administration and Reform in America, 1880–1920. Berkeley: University of California Press.
Silverman, Eli B. 1992. NYPD Battles Crime: Innovative Strategies in Policing. Boston: Northeastern University Press.
Smith, Dennis C., and William J. Bratton. 2001. Performance Management in New York City: CompStat and the Revolution in Police Management. In Quicker, Better, Cheaper: Managing Performance in American Government, edited by Dall W. Forsythe, 453–82. Albany, NY: Rockefeller Institute Press.
Streib, Gregory D., and Theodore H. Poister. 1999. Assessing the Validity, Legitimacy, and Functionality of Performance Measurement Systems in Municipal Governments. American Review of Public Administration 29(2): 107–23.
Travers, T., G. Jones, and J. Burnham. 1993. The Impact of Population Size on Local Authority Costs and Effectiveness. York, UK: Joseph Rowntree Foundation.
Usher, Charles L., and Gary Cornia. 1981. Goal Setting and Performance Assessment in Municipal Budgeting. Public Administration Review 41(2): 229–35.
Walzer, Norman. 1972. Economies of Scale and Municipal Police Services: The Illinois Experience. Review of Economics and Statistics 54(4): 431–38.
Wang, XiaoHu. 1997. Local Officials' Preferences of Performance Measurements: A Study of Local Police Services. PhD diss., Florida International University.
———. 2002. Assessing Performance Measurement Impact: A Study of U.S. Local Governments. Public Performance and Management Review 26(1): 26–43.
Weitzman, Beth C., Diana Silver, and Caitlyn Brazill. 2006. Efforts to Improve Public Policy and Programs through Data Practice: Experiences in 15 Distressed American Cities. Public Administration Review 66(3): 386–99.


441 G St. N.W.
Washington, DC 20548

November 15, 2017

The Honorable Peter DeFazio
Ranking Member
Committee on Transportation and Infrastructure
House of Representatives

Commercial Aviation: Pilots’ and Flight Attendants’ Exposure to Noise aboard Aircraft

Dear Mr. DeFazio:

Airline pilots and flight attendants, working in the cockpit and cabin, are exposed to noise as a
routine part of their jobs. This noise may come from aircraft engines during takeoff and landing
or from high-speed air flow over the fuselage during flight. Exposure to elevated noise levels
can cause permanent changes in hearing, diminished ability to communicate, and non-auditory
effects such as fatigue. The Occupational Safety and Health Administration (OSHA), which sets
and enforces standards related to working conditions,1 established a noise exposure standard
that requires employers to take certain actions when an employee’s noise exposure reaches a
level deemed to be unsafe.2 The Federal Aviation Administration (FAA) assumed responsibility
for the safety and health aspects of cockpit and cabin crewmember working environments in
1975,3 but in 2013, FAA announced in a policy statement that OSHA would have authority to
enforce its occupational noise exposure standard in the cabins of aircraft in operation, where
flight attendants work.

You asked us to provide information on noise levels experienced by crewmembers on
commercial service aircraft and their access to hearing protection. We examined: (1) what is
known about aircraft cabin and cockpit noise levels compared with occupational noise exposure
standards and (2) selected airlines’ policies on hearing protection for crewmembers.

To address these objectives we reviewed FAA’s regulations and guidance pertaining to interior
aircraft noise, the occupational noise exposure standard from OSHA, and the recommended
occupational noise exposure limit from the National Institute for Occupational Safety and Health
(NIOSH). We assessed OSHA’s data on enforcement activity related to aircraft noise from
August 2013, when OSHA assumed its authority to enforce its noise standard in the cabin, to
May 2017. We also reviewed FAA’s analysis of four safety and oversight databases to identify
reports on aircraft noise made in the previous 5 years and data from the Aviation Safety

1 OSHA is charged with enforcing the Occupational Safety and Health Act of 1970 (OSH Act), Pub. L. No. 91-596, 84
Stat. 1590.

2 29 C.F.R. § 1910.95.

3 Under 29 U.S.C. § 653(b)(1) of the OSH Act, OSHA is precluded from applying its occupational safety and health
standards to the working conditions over which a federal agency has exercised its statutory authority. FAA exercises
its statutory authority pursuant to 49 U.S.C. § 44701.


Reporting System (ASRS), which is a database maintained by the National Aeronautics and
Space Administration (NASA), to identify reports submitted from January 2012 through March
2017 about noise interference with onboard crewmembers’ communication.4 We excluded data
on noise concerns from malfunctioning equipment because while it may contribute to a
crewmember’s noise exposure, it does not represent normal operating conditions of an aircraft.
To determine the reliability of the data we used, we assessed agency documentation and
interviewed officials and concluded that the data were sufficiently reliable for our purposes. We
searched academic, government, and trade publications for studies that measured noise levels
inside aircraft, identifying 10 studies that met our criteria for methodological quality. Six of these
measured noise in aircraft cabins, two measured cockpit noise, and two of the 10 measured noise in
both locations. In addition, we interviewed officials from FAA, OSHA, NIOSH, seven labor
groups representing pilots and flight attendants, two aviation trade associations, the four largest
aircraft manufacturers, and eight mainline and regional airlines.5 We selected the airlines to
include those that had a range of aircraft types and that had the most passenger enplanements
in the U.S. in 2016, the most recent data available. Our interviews with these airlines provided
information on their aircraft noise tests and on hearing protection policies, and are not
generalizable to all airlines. Also, we could not confirm all of the information provided in
interviews with airlines and manufacturers, because the companies did not make the supporting
documentation available to us, citing its proprietary nature. See enclosure I for a full description
of our scope and methodology.

We conducted this performance audit from March 2017 to November 2017 in accordance with
generally accepted government auditing standards. Those standards require that we plan and
perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our
findings and conclusions based on our audit objectives. We believe that the evidence obtained
provides a reasonable basis for our findings and conclusions based on our audit objectives.

Results in Brief

While information on aircraft noise is limited, the studies and data we reviewed suggest that
aircraft cabin and cockpit noise levels likely do not exceed the OSHA standard. Of the 10
studies that we reviewed, none found noise levels that clearly exceeded this standard. FAA and
OSHA have received few complaints from crewmembers related to aircraft noise levels. For
example, crewmembers submitted two complaints about ambient aircraft noise levels to OSHA
since the FAA policy statement was issued in 2013, and no reports related to aircraft noise were
submitted to FAA’s safety and oversight-related databases in the last 5 years. Airlines and
aircraft manufacturers that we interviewed told us that noise measurements taken in their
aircraft are below the OSHA standard. However, officials from labor groups representing pilots
and flight attendants told us that while noise levels likely do not exceed the OSHA standard,
they believe crewmembers nonetheless are sometimes exposed to unsafe levels of noise that
could result in hearing loss or fatigue. Officials from all eight of the airlines we spoke with said
that they allow pilots to wear hearing protection equipment, such as noise-reducing headsets,
and officials from five of these airlines said they allow flight attendants to wear ear plugs, in

4 The ASRS receives, processes, and analyzes voluntarily submitted, anonymous aviation safety incident reports
from pilots, flight attendants, and others. The database is administered by NASA for FAA and is a public-safety data
repository.

5 Mainline airlines provide domestic and international passenger and cargo service on larger aircraft. Regional
airlines provide domestic and limited international passenger service, generally using aircraft with fewer than 90
seats, and cargo service to smaller airports. See enclosure I for full list of study participants.


aircraft in operation. According to officials from three of the crewmember labor groups we
interviewed, use of this equipment appears to be limited. Officials from the pilot labor groups we
spoke with said noise-reducing headsets can be expensive or uncomfortable, and some models
are not compatible with some aircraft communications systems.

We are not making any recommendations in this report.

Background

According to NIOSH, a federal research agency charged with the examination of occupational
health hazards,6 each year, approximately 22 million workers are exposed to noise levels that
may be hazardous to their hearing and may cause physiological stress, cardiovascular disease,
hypertension, and disruption of job performance. Noise is measured in units of sound pressure
called decibels with a sound level meter or a noise dosimeter.7

Occupational Noise Exposure Standards

OSHA has established an occupational noise exposure standard that requires employers to
administer a hearing conservation program when noise exposures reach 85 decibels over an 8-
hour period, which OSHA refers to as an action level.8 The program should include training,
annual hearing tests, hearing protection equipment for employees, and other actions. OSHA
also established the permissible exposure limit, which is a legal limit for employees’ exposure to
noise and is set at 90 decibels over an 8-hour period.9 OSHA determines acceptable exposure
limits using a 5-decibel exchange rate, so that for every 5-decibel increase or decrease of noise,
the allowable exposure times are reduced by half or doubled, respectively.
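As a rough illustration of how a dosimeter-style calculation works under this standard, the sketch below computes a noise dose and the corresponding 8-hour time-weighted average (TWA) from a set of exposure intervals, using the reference-duration formula and TWA conversion given in OSHA's noise standard (29 C.F.R. § 1910.95); the sample exposure intervals are hypothetical, not measured data.

import math

# Rough sketch of an OSHA-style noise dose and 8-hour TWA computation
# (29 C.F.R. 1910.95: 90-decibel criterion level, 5-decibel exchange rate).
# The exposure intervals below are hypothetical, not measured data.

def reference_duration_hours(level_db):
    # Allowable duration at a given sound level: T = 8 / 2^((L - 90) / 5)
    return 8.0 / (2.0 ** ((level_db - 90.0) / 5.0))

def noise_dose_percent(intervals):
    # intervals: list of (sound level in dBA, hours spent at that level);
    # per the standard, only levels of at least 80 dBA are integrated into
    # the dose for hearing conservation purposes.
    return 100.0 * sum(hours / reference_duration_hours(level)
                       for level, hours in intervals)

def eight_hour_twa(dose_percent):
    # TWA = 16.61 * log10(D / 100) + 90
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Hypothetical work day: 6 hours at 82 dBA, 1.5 hours at 88 dBA,
# and 0.5 hours at 95 dBA.
exposures = [(82.0, 6.0), (88.0, 1.5), (95.0, 0.5)]
dose = noise_dose_percent(exposures)
print(f"dose = {dose:.0f}% of the permissible exposure")
print(f"8-hour TWA = {eight_hour_twa(dose):.1f} dBA")
# A dose of 50% (TWA of 85 dBA) triggers the hearing conservation program;
# a dose of 100% (TWA of 90 dBA) corresponds to the permissible exposure limit.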

NIOSH, though not responsible for enforcing workplace safety, has established a voluntary
recommended exposure limit for occupational noise exposure that is different from the OSHA
action level.10 Like OSHA’s action level, NIOSH’s limit recommends that an employee’s
exposure be limited to an 8-hour time-weighted average of 85 decibels. Where NIOSH’s
recommended limit differs from the OSHA standard is in its use of a 3-decibel exchange rate,

6 The OSH Act established NIOSH, which is part of the U.S. Department of Health and Human Services’ Centers for
Disease Control and Prevention. Pub. L. No. 91-596, § 22, 84 Stat. 1590,1612. NIOSH is responsible for
recommending occupational safety and health standards and for describing safe exposure concentrations.

7 OSHA and NIOSH noise measurements are expressed in A-weighted decibels, an adjustment intended to match
the perception of loudness by the human ear. According to OSHA, examples of some common sources and their
expected noise levels are that a freight train passing at 100 feet away would be expected to result in a noise level of
around 80 decibels and a construction site would be expected to result in a noise level of around 100 decibels. A
dosimeter is a wearable sound level meter that measures and stores the sound levels experienced by the test subject
during an exposure period and calculates a time-weighted average noise value.

8 A time-weighted average is used to calculate an employee’s exposure to noise over an 8-hour day, which accounts
for the average of different exposure levels during an exposure period. 29 C.F.R. § 1910.95.

9 When noise exposure exceeds the permissible exposure limit, employers must use administrative or engineering
controls to reduce noise levels, and provide hearing protection equipment if those controls fail. 29 C.F.R. §
1910.95(b).

10 Department of Health and Human Services, National Institute for Occupational Safety and Health, Criteria for a
Recommended Standard: Occupational Noise Exposure (Cincinnati, Ohio: June 1998).


rather than the 5-decibel exchange rate OSHA uses.11 NIOSH’s lower exchange rate results in
shorter allowable exposures for noise levels above 85 decibels than OSHA’s action level. For
example, under the OSHA action level, an employee can be exposed to noise levels of 100
decibels for one hour, compared to 15 minutes under the NIOSH recommended exposure limit.
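The effect of the two exchange rates can be expressed with a single formula for allowable exposure time. The short sketch below is a simplified illustration, not an official calculator; it reproduces the example above of one hour at 100 decibels under the OSHA action-level parameters versus 15 minutes under NIOSH's recommended limit.

# Allowable exposure time T (hours) at sound level L, given an 85-dBA,
# 8-hour criterion: T = 8 / 2^((L - 85) / exchange_rate).
# Simplified illustration of the exchange-rate difference discussed above.

def allowable_hours(level_db, exchange_rate_db, criterion_db=85.0):
    return 8.0 / (2.0 ** ((level_db - criterion_db) / exchange_rate_db))

for level in (85, 90, 95, 100):
    osha_action = allowable_hours(level, 5.0)   # OSHA action level, 5-dB exchange rate
    niosh_rel = allowable_hours(level, 3.0)     # NIOSH recommended limit, 3-dB exchange rate
    print(f"{level} dBA: OSHA action level {osha_action:.2f} h, NIOSH REL {niosh_rel:.2f} h")

# At 100 dBA this yields 1.00 hour under the OSHA action-level parameters and
# 0.25 hours (15 minutes) under the NIOSH recommended exposure limit.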

Sources of Interior Aircraft Noise

Aircraft generate many noises, both intermittent and continuous and located both inside and
outside the aircraft, with the greatest amounts of noise being generated by the air flow around
the aircraft, the engines, and the air conditioning systems, as illustrated in figure 1.

Figure 1: Examples of Sources of Interior Aircraft Noise

The sources and level of noise vary depending on aircraft age, engine type and location, phase
of flight, aircraft speed, and the listener’s location. For example, pilots working in the cockpit
hear the aircraft’s radio and alert systems, while flight attendants working in the cabin hear the
public address system and passenger conversations. Advances in engineering have decreased

11 OSHA uses a 5-decibel exchange rate because it determined this accounts for the time during the workday that a
worker was not exposed to noise hazards. NIOSH has stated that the 3-decibel exchange rate is the method most
firmly supported by scientific evidence for assessing hearing impairment as a function of noise level and duration.


cabin noise levels substantially over the years through innovations in aircraft designs and new
technologies. Examples include making the shape of the aircraft more aerodynamic; engine
modifications, such as lowering fan speeds; and technologies to reduce the amount of noise
and vibration experienced in the aircraft, such as new insulating materials and advances in
noise and vibration suppression systems that are installed in some aircraft.

FAA’s and OSHA’s Roles and Responsibilities

As noted earlier, although OSHA is responsible for working conditions for most private-sector
and some public-sector employees, in 1975, FAA asserted responsibility for the regulation of
occupational safety and health standards for aviation crewmembers.12 As part of FAA’s
airworthiness standards, FAA requires that cockpit noise and vibration levels not interfere with
the safe operation of the aircraft and that public address system announcements are audible by
the cabin’s occupants.13 However, neither regulation dictates a specific noise exposure limit.
FAA has issued guidance for airlines and manufacturers on recommended noise levels for
cockpit and certain crew rest areas in order to reduce the effect of noise on crewmembers’
sleep.14 In 2013, in response to a federal requirement,15 FAA issued a policy statement making
OSHA’s noise exposure standard, among other OSHA standards, applicable to the working
conditions of cabin crewmembers—but not pilots—on aircraft in operation.16 FAA and OSHA
agreed that OSHA would respond to complaints through written and oral communication and
would coordinate with FAA if workplace inspections were necessary.17

Information on Aircraft Noise Is Limited but Suggests Levels Do Not Exceed OSHA’s
Standard

Cabin Noise Levels

None of the eight published studies we reviewed that conducted measurements inside aircraft
cabins definitively showed noise levels in excess of OSHA’s action level.18 Direct comparisons

12 29 U.S.C. § 653(b)(1).

13 14 C.F.R. § 25.771(e), 14 C.F.R. § 25.1423(c).

14 FAA recommends that in cockpits with noise levels above 88 decibels efforts should be made to aid pilot
communication, such as installing door seals, acoustical insulation, and the use of noise-cancelling headsets or other
hearing protectors. See Department of Transportation, Federal Aviation Administration, Advisory Circular: Cockpit
Noise and Speech Interference Between Crewmembers, AC 20-133 (Mar. 22, 1989). FAA also recommends that
long-haul crew rest areas should be designed with the objective to have noise levels during cruise flight in the range
of 70 to 75 decibels. See Department of Transportation, Federal Aviation Administration, Advisory Circular: Flightcrew
Member Rest Facilities, FAA AC 117-1 (Aug. 21, 2013).

15 FAA Modernization and Reform Act of 2012, Pub. L. No. 112-95, § 829, 126 Stat. 11, 134.

16 Department of Transportation, Federal Aviation Administration, Occupational Safety and Health Standards for
Aircraft Cabin Crewmembers (Washington, D.C.: Aug. 21, 2013). The policy statement also included OSHA’s
standards for hazard communication (29 C.F.R. § 1910.1200) and bloodborne pathogens (29 C.F.R. § 1910.1030).
FAA determined that an aircraft is in operation from the time it is first boarded by a crewmember, before a flight, to
the time the last crewmember leaves the aircraft after completion of that flight.

17 Department of Transportation, Federal Aviation Administration and Department of Labor, Occupational Safety and
Health Administration, Occupational Safety and Health Standards for Aircraft Cabin Crewmembers: Memorandum of
Understanding between OSHA and FAA, (Washington, D.C.: Aug. 26, 2014).

18 Enclosure II provides details on each of the studies we reviewed.


between most of the studies and OSHA’s action level were difficult, because the studies
generally did not publish results in the 8-hour time-weighted average format with a 5-decibel
exchange rate that OSHA uses. The single study that used this format reported noise levels
below the OSHA action level on a four-engine regional jet, the Avro RJ85, when using
dosimeters to measure entire flight attendant work periods.19 The seven additional studies we
reviewed reported cabin noise levels in other formats, such as simple averages of all the
measurements taken, but generally showed average noise levels to be below 85 decibels on a
variety of jet and turboprop-powered mainline and regional aircraft such as the Boeing 737 and
777, the Airbus A321 and A330, and the Bombardier CRJ-700 and DHC-8 Q400.

We also compared the studies’ findings to NIOSH’s recommended limit, which is not a legal
requirement, but rather a recommendation. Two of the eight studies we reviewed indicated that
noise in certain types of aircraft may reach or exceed NIOSH’s recommended limit in the case
of crewmembers who work for durations longer than 8 hours. These studies reported cabin
noise levels in the format used by NIOSH’s recommended exposure level (8-hour time-weighted
average with a 3-decibel exchange rate). The aforementioned study using dosimeters on the
Avro RJ-85 concluded that 3 of 20 flight attendant shifts had noise levels in excess of the
NIOSH limit, and the other reported sound levels from one flight attendant shift on a long
duration flight that were near NIOSH’s recommended limit.

Officials from the eight airlines and four aircraft manufacturers we interviewed told us that they
conduct tests of noise onboard aircraft and have found that noise levels are consistently below
the 85-decibel level specified in both the OSHA standard and NIOSH's recommended limit.
Each of the aircraft manufacturers told us that they have designed cabins to meet certain noise
levels in response to customer demand and that they conduct tests to confirm these levels for
each new aircraft model. Officials from seven of the selected airlines told us that they have
conducted cabin noise level testing on their aircraft in service, generally by placing wearable
dosimeters onto flight attendants for entire work periods, and five of these airlines told us that
this testing was in response to the 2013 FAA policy statement. These officials told us that the
sound levels they measured varied by aircraft type and position in the cabin, but the recorded
noise levels were all below 85 decibels on an 8-hour time-weighted average basis.20

Cockpit Noise Levels

Less comprehensive information is available about cockpit noise levels. We identified only four
studies that measured cockpit noise levels, and while none of them reported results in the 8-
hour time-weighted average format, each of them reported average noise levels below 85
decibels. These studies used a variety of measurement techniques such as a mannequin
equipped with microphones to measure noise, a hand-held sound level meter, or pilots outfitted
with dosimeters during flight. The studies conducted measurements on several different
mainline and regional aircraft such as the Boeing 747 and 757, the Bombardier DHC-8 Q400,
and the Airbus A340 and A319.

19 Dosimeter measurements of entire flight attendant work periods could also include significant time spent not
onboard an aircraft, such as time spent waiting for a flight in the terminal. Additionally, as of the end of 2016, the 10
largest U.S. airlines did not operate any Avro RJ-85 series aircraft.

20 Airline officials told us that they measured entire flight attendant work periods, which include time spent in airport
terminals, because this more accurately reflected their true exposure to noise, and that it was not possible to isolate
just the time spent in an aircraft cabin from these measurements.


The four aircraft manufacturers told us that they test cockpit noise levels in each new aircraft
model and have found that levels are below 85 decibels. The airlines we spoke with told us that
they have not tested cockpit noise levels on aircraft in service, and that they do not regard
cockpit noise levels as posing a problem for pilot communications or other safety concerns.

Pilot and Flight Attendant Concerns Related to Aircraft Noise

Labor groups representing pilots and flight attendants told us that they have concerns about the
amount of noise exposure their members receive onboard aircraft; however, there have been
few noise-related complaints made by pilots and flight attendants to OSHA, FAA, and the
Aviation Safety Reporting System (ASRS). Labor groups representing flight attendants told us
that they experience especially high levels of noise exposure when working in turboprop-
powered aircraft, older aircraft, and aircraft with tail-mounted engines, such as the McDonnell
Douglas MD-80 series.21 Officials from labor groups representing pilots told us that pilots
experience high levels of noise in certain aircraft due to equipment cooling fans, the
configuration of the air conditioning system, and equipment such as windshield wipers. While
we do not consider noises from equipment malfunctions as part of the daily operations of an
aircraft, labor officials representing both groups of crewmembers told us that these
malfunctions, such as faulty door seals, can create particularly loud noises. According to airline
officials we interviewed, faulty door seals are not common, and when they occur, they are
typically repaired before the next flight.

Labor groups representing flight attendants said that while cabin noise levels are likely below
the OSHA action level, the noise exposure crewmembers do experience can result in difficulty
communicating, fatigue, and, with long-term exposure, hearing loss. Labor officials expressed
concern that OSHA’s 90-decibel permissible exposure limit, which is the sound level at which
employers must take steps to reduce noise and is higher than OSHA's 85-decibel action level, may not
be sufficient to protect crewmember health and safety. These officials cited research conducted
by NIOSH that estimated around 25 percent of the population would experience noise-induced
hearing loss over a 40-year career when exposed to that level of sound daily.22

Nonetheless, OSHA has received only two complaints of high ambient cabin noise levels since
the 2013 FAA policy statement was issued, while during the same time period it received more
than 600 complaints in the commercial passenger aviation sector in general.23 In these two
instances, OSHA conducted an informal review, in response to which the airlines provided
noise-testing data from aircraft manufacturers, documenting noise levels in an array of aircraft
flown by the airlines. The data showed that for the 16 aircraft included in the documentation,
cruise flight noise levels were below the OSHA action level. Following its review, OSHA
determined that no violation had occurred. FAA officials also told us that they have not received
any complaints during their routine meetings with labor groups representing pilots and flight

21 As of the end of 2016, the 10 largest U.S. airlines operated 238 MD-80 series aircraft.

22 NIOSH defines noise-induced hearing loss as a material hearing impairment with reductions in the hearing
threshold at certain sound frequencies of more than 25 decibels.

23 During this time, OSHA received one other complaint related to malfunctioning equipment on one specific flight.


attendants, and that they searched several of their safety and oversight reporting system
databases for noise-related complaints and found none submitted in the last five years.24

We also searched NASA’s ASRS database for reports submitted by aviation workers since
January 2012 that discussed aircraft noise levels interfering with crewmembers’ ability to
effectively communicate. We limited our search to these reports because FAA requires that
aircraft noise in the cockpit does not interfere with the safe operation of the aircraft and that
public address system announcements are audible by the cabin’s occupants. We found that out
of the more than 26,000 reports submitted during the period, only 10 referred to
communications difficulties caused by normal ambient noise levels.25 These reports included
complaints about difficulty hearing other crewmembers or radio transmissions, as well as
complaints about being distracted or fatigued by loud noises.

Hearing Protection Policies of Selected Airlines Vary

In general, FAA does not prescribe airline policies on crewmembers' hearing protection, other than requiring that, if a crewmember does wear hearing protection, it must not interfere with safety-related
duties.26 In accordance with FAA’s 2013 policy statement, airlines are only required to provide
hearing protection for cabin crewmembers—but not pilots—as part of a hearing conservation
program if noise levels are in excess of the OSHA action level.27 FAA requires pilots to use
headsets when the aircraft is below 18,000 feet, but, depending on the model, these headsets
may or may not protect hearing.28

We asked eight airlines about their policies regarding hearing protection for flight attendants and
pilots. A summary of their responses is provided in table 1.

24 FAA searched the Program Tracking and Reporting Subsystem (PTRS), the Safety Assurance System (SAS), the
Air Transportation Oversight System (ATOS), and the Accident Incident Database System (A/IDS) using several
noise-related terms, such as noise, decibel, noise level, loud noise, and loud sound, among others.

25 We found an additional 30 reports that discussed communications difficulties caused by noise from malfunctioning
aircraft equipment, such as a radio with excessive static or a leaking door seal.

26 14 C.F.R. § 121.135, 14 C.F.R. § 121.397.

27 Department of Transportation, Federal Aviation Administration, Occupational Safety and Health Standards for
Aircraft Cabin Crewmembers (Washington, D.C.: August 21, 2013).

28 14 C.F.R. § 121.359(g).


Table 1: Number of Selected Airlines That Allow or Provide Hearing Protection Equipment to Pilots and Flight Attendants for Use on an Aircraft in Operation, Based on Interviews of Airline Officials

Employee type       Policy (a)                                     Number of airlines (of 8)
Pilots              Allow earplugs                                 5
                    Allow noise-reducing headsets                  8
                    Provide earplugs                               5
                    Provide active noise-reducing headsets (b)     2
Flight attendants   Allow earplugs                                 5 (c)
                    Provide earplugs                               4

Source: GAO analysis of information provided by airlines | GAO-18-109R

(a) In addition to providing hearing protection equipment, officials from two of the airlines said they make annual hearing tests available to crewmembers.
(b) In active noise-reducing headsets, sound is measured inside the headset and an opposite-phase copy of the noise is fed back into the headset, so the two signals cancel each other (a minimal sketch of this cancellation follows the table).
(c) Officials from three of the airlines said they do not allow flight attendants to wear earplugs because they can diminish a flight attendant's ability to hear public address announcements. Officials from one of the airlines said that they provide and allow flight attendants to wear earplugs, but only for flight attendants working on certain aircraft and during noisier flight segments.
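To illustrate the cancellation principle described in note (b), the following minimal sketch adds an inverted copy of a synthetic noise signal to itself and reports the residual level. It is illustrative only; real active noise-reducing headsets estimate and invert the incoming noise continuously with analog or digital signal-processing hardware.

import numpy as np

# Minimal sketch of destructive interference: a noise signal plus its
# opposite-phase (inverted) copy sums to (nearly) zero.
rng = np.random.default_rng(0)
fs = 8000                                  # sample rate in Hz (illustrative)
t = np.arange(fs) / fs
noise = 0.5 * np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(fs)

anti_noise = -noise                        # perfect opposite-phase copy
residual = noise + anti_noise              # what the ear would hear

def rms_db(x):
    # Root-mean-square level relative to full scale, in decibels
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

print(f"noise level:    {rms_db(noise):6.1f} dBFS")
print(f"residual level: {rms_db(residual):6.1f} dBFS")   # effectively silence
# In practice the anti-noise estimate is imperfect, so cancellation reduces
# rather than eliminates the noise reaching the ear.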

Officials from three of the labor groups we interviewed said that they believe that the number of
crewmembers who choose to use hearing protection is limited. Pilot labor groups told us that
noise-reducing headsets can be costly or uncomfortable and that in some cases the aircraft
communications systems are not compatible with such headsets.

Agency Comments

We requested comments on a draft of this product from the Department of Health and Human
Services (HHS), the Department of Labor (DOL), and the Department of Transportation (DOT).
HHS and DOL provided technical comments, which we incorporated as appropriate, and DOT
had no comments.

– – – – –

We are sending copies of this report to the appropriate congressional committees, the Secretary
of Health and Human Services, the Secretary of Labor, and the Secretary of Transportation. In
addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-2834 or
dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public
Affairs may be found on the last page of this report. GAO staff who made key contributions to
this report were Heather Halliwell (Assistant Director); Anne Doré (Analyst-in-Charge); Blake
Ainsworth; Marcia Crosse; Alex Fedell; Jim Geibel; Dave Hooper; SaraAnn Moessbauer;
Pamela Snedden; Madhav Panwar; Malika Rice; and Michelle Weathers.

Sincerely yours,

Gerald L. Dillingham, Ph.D.
Director
Physical Infrastructure Issues

Enclosures – 2



Enclosure I: Objectives, Scope, and Methodology

This report examines: (1) what is known about aircraft cabin and cockpit noise levels compared
with occupational noise exposure standards and (2) selected airlines’ policies on hearing
protection for crewmembers.

To identify what is known about noise levels inside aircraft cabins and cockpits, we conducted a
search of government, academic, and trade literature using terms such as “aircraft,” “cabin,”
“noise,” “decibel,” “crew,” and “sound,” and asked the subjects we interviewed as part of this
engagement whether they knew of any additional studies. From these searches, we selected
176 studies for further review. We further screened these studies to identify those that had likely
conducted independent direct measurements of aircraft interior noise. We also screened the
studies for reliability using the following criteria: (1) whether the study reported sound
measurements in a useable format; (2) whether the study was conducted by an independent
party (i.e., not an airline or aircraft manufacturer); and (3) whether the study used a recognized
methodology for conducting measurements. These screening efforts yielded 10 studies. They
were from a mixture of sources including academic journals, government agencies, and industry
associations and varied in the techniques used to conduct measurements, including taking
measurements with fixed instruments while sitting in passenger seats and placing wearable
dosimeters on flight attendants performing their duties.

To review the 10 studies we identified, we developed a data collection instrument designed to
examine the studies’ methodologies and major findings. Examples of study facets we examined
included the number and type of aircraft measured; the method used to take measurements
(e.g. a handheld sound level meter or a wearable dosimeter); the format used to report noise
levels (e.g. an 8-hour time-weighted average or a simple mean average of measurements
taken); the noise levels reported; and other relevant findings. (See enclosure II for a list of these
studies and a detailed summary of their findings.)

The studies we reviewed varied in terms of methodologies used, aircraft types sampled, and
format used to report results. While the studies mostly reported noise levels below occupational
noise exposure standards, this variance does not allow us to determine interior noise levels
present across the fleet of commercial aircraft operating in the United States.

To obtain information on the role of FAA and OSHA in overseeing aircraft interior noise levels,
we reviewed FAA’s and OSHA’s laws, regulations and guidance pertaining to noise exposure,
including FAA’s 2013 policy statement on applying OSHA’s occupational noise exposure
standard in aircraft cabins, the 2014 memorandum of understanding that delineated FAA’s and
OSHA’s role in implementing that policy, and FAA’s aircraft certification rules and guidance
related to interior noise. We also reviewed the occupational noise exposure standard from
OSHA and the recommended occupational noise exposure limit from NIOSH and interviewed
officials from FAA, OSHA, and NIOSH about the development, implementation, and
enforcement of those standards.

We also interviewed a range of aviation entities that have knowledge of aircraft interior noise
levels. These included officials from seven labor groups representing pilots and flight attendants
who work on commercial aircraft, two aviation trade associations, and the four largest aircraft
manufacturers. We also selected eight mainline and regional airlines that had the most
passenger enplanements in the United States in 2016 (the most recent data available) and that
together ensured a wide range of aircraft types were included in our review. Information on noise
level testing that we collected from our interviews with airlines and aircraft manufacturers could
not be confirmed because the companies did not make the supporting documentation available
to us, citing its proprietary nature. Additionally, the information and perspectives that we obtained
from these interviews may not be generalizable to all industry stakeholders. (See table 2.)

Table 2: Federal Agencies, Airlines, Industry Groups, Labor Groups, and Aircraft Manufacturers We
Contacted or Interviewed
U.S. federal agencies
Department of Health and Human Services, National Institute for Occupational Safety and Health
Department of Labor, Occupational Safety and Health Administration
Department of Transportation, Federal Aviation Administration
U.S. mainline passenger airlines
American Airlines
Delta Airlines
Southwest Airlines
United Airlines
U.S. regional passenger airlines
ExpressJet Airlines
Horizon Air
Republic Airline
SkyWest Airlines
Industry groups
Airlines for America
Regional Airline Association
Airline labor groups
Air Line Pilots Association
Allied Pilots Association
Association of Flight Attendants
Association of Professional Flight Attendants
Coalition of Airline Pilots Associations
Independent Pilots Association
Teamsters Local 1224
Aircraft manufacturers
Airbus
Boeing
Bombardier
Embraer

Source: GAO. | GAO-18-109R

In addition, we reviewed an FAA analysis, conducted in May 2017, of four of its safety and
oversight databases to identify noise-related complaints made in the previous 5 years. We also
evaluated data from the Aviation Safety Reporting System (ASRS), a safety database
maintained by NASA for FAA, to identify reports made by pilots and flight attendants, among
other aviation workers, of noise-related communication difficulties. We limited our search to
these reports because FAA requires that aircraft noise in the cockpit not interfere with the
safe operation of the aircraft and that public address system announcements be audible to the
cabin’s occupants. To identify these complaints, we searched the ASRS for reports made
between January 2012 and March 2017 and identified those that referred to communication
challenges caused directly by aircraft interior noise, either from normal operations or
malfunctioning equipment. We assessed the reliability of this dataset by reviewing our previous
reliability assessments, which included reviews of documentation related to data collection
and storage and interviews with ASRS officials, and by confirming against current ASRS
documentation that those earlier assessments remained valid. In addition, we reviewed data
from the OSHA Information System on complaints about aircraft cabin noise from August
2013, the year OSHA began receiving complaints related to aircraft interior noise, through May
2017; for the same period, we also reviewed data from the OSHA Information System to
identify the total number of complaints submitted to OSHA from the passenger air transportation
sector. To determine the reliability of these data, we interviewed officials and reviewed
documentation from OSHA. From each of these sources, we excluded data related to
noise from malfunctioning aircraft equipment because, while noise from such equipment may at
times contribute to a crewmember’s overall noise exposure, it does not represent normal
operating conditions of aircraft. We also reviewed documentation related to OSHA’s informal
review of the complaints made about ambient aircraft noise. This documentation included
information submitted by airlines, such as noise measurements that were taken by the
manufacturers of their aircraft.
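
A minimal sketch of the kind of date-window and keyword filter this describes appears below. The record fields, search terms, and sample narratives are illustrative assumptions, not the ASRS schema or the actual query GAO ran.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SafetyReport:
    """Simplified stand-in for an ASRS report or OSHA complaint (fields are illustrative)."""
    filed: date
    narrative: str

NOISE_TERMS = ("noise", "could not hear", "unable to hear", "pa announcement")
EQUIPMENT_FAULT_TERMS = ("malfunction", "inoperative")

def is_relevant(report: SafetyReport, start: date, end: date) -> bool:
    """Keep reports filed in the review window that cite interior noise as a
    communication problem, excluding noise from malfunctioning equipment."""
    text = report.narrative.lower()
    in_window = start <= report.filed <= end
    cites_noise = any(term in text for term in NOISE_TERMS)
    equipment_fault = any(term in text for term in EQUIPMENT_FAULT_TERMS)
    return in_window and cites_noise and not equipment_fault

# Hypothetical records: the first is kept, the second is excluded as an equipment fault.
reports = [
    SafetyReport(date(2015, 6, 1), "Cabin noise during climb; unable to hear the PA announcement."),
    SafetyReport(date(2015, 7, 3), "Loud whine from an inoperative valve; noise stopped after shutoff."),
]
kept = [r for r in reports if is_relevant(r, date(2012, 1, 1), date(2017, 3, 31))]
print(len(kept))  # -> 1
```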

To identify airline policies on hearing protection for crewmembers, we asked officials from the
selected mainline and regional airlines and from airline labor groups to describe the types of
hearing protection crewmembers are permitted to wear, any restrictions on their use, and other
hearing-related services, such as hearing tests, that are provided to crewmembers. We also
asked these officials about the extent to which hearing protection and hearing-related services
are used and what factors contribute to their use. The information and perspectives that we
obtained from these interviews may not be generalizable to all airlines or labor groups. In
addition, we were not able to confirm the information airline officials provided us about their
policies because not all of the airlines provided related documentation.

We conducted this performance audit from March 2017 to November 2017 in accordance with
generally accepted government auditing standards. Those standards require that we plan and
perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our
findings and conclusions based on our audit objectives. We believe that the evidence obtained
provides a reasonable basis for our findings and conclusions based on our audit objectives.


Enclosure II: Description of Studies Measuring Aircraft Interior Noise

We identified studies that reported noise levels measured inside aircraft cabins and cockpits by
conducting our own literature searches using relevant terms and by asking the subjects we
interviewed whether they knew of any additional studies. We screened the studies we identified
from these sources using criteria such as whether the study reported sound measurements,
was conducted by a neutral party, and used a recognized methodology for conducting
measurements. The resulting 10 studies were from academic journals, government agencies,
and industry associations, and varied in the techniques used to conduct measurements. For
example, such techniques included taking measurements with static instruments while sitting in
passenger seats and placing wearable dosimeters on flight attendants performing their duties.
Table 3 summarizes the results of our review.

Table 3: Results of GAO’s Review of 10 Studies That Measured Noise Levels in Aircraft Cabins and Cockpits

Study 1 (1994). Location measured: cabin. Technique: fixed sound level meter. Aircraft: Boeing 727 and 757; McDonnell Douglas DC-9 and MD-80. Flights measured: 35. Results directly comparable to standards: no. Findings: average cruise-flight noise levels in each of the aircraft types measured were between 60 and 83 decibels.

Study 2 (2002). Location measured: cockpit. Technique: acoustic mannequin. Aircraft: Airbus A320; Boeing 737-400, 747-100, 747-200, 747-400, 757-200, and 767-300; McDonnell Douglas DC-10-30; British Aerospace ATP; Concorde. Flights measured: 20. Results directly comparable to standards: no. Findings: noise levels, averaged over the entire length of each flight, were between 70 and 77 decibels for each of the aircraft measured.

Study 3 (2007). Location measured: cabin and cockpit. Technique: fixed sound level meter and wearable dosimeter. Aircraft: Airbus A321 and others not reported. Flights measured: not reported. Results directly comparable to standards: NIOSH, but not OSHA. Findings: one measurement of noise levels over a full flight attendant work day on an unspecified long-haul aircraft was slightly below 85 decibels; one measurement of a pilot’s noise exposure on a single flight aboard an unspecified short-haul aircraft was below 75 decibels; measurements taken in the cabin of the A321 during takeoff and climb were as high as 92.5 decibels in a rear seat location during takeoff.

Study 4 (2012). Location measured: cockpit. Technique: fixed sound level meter. Aircraft: Airbus A319 and Bombardier DHC-8 Q400. Flights measured: 2 (one per aircraft type). Results directly comparable to standards: no. Findings: average noise levels inside the two cockpits measured were between 60 and 85 decibels for each phase of flight (pre-flight, taxi, takeoff, climb, cruise, descent, final approach, and landing).

Study 5 (2008). Location measured: cabin and cockpit. Technique: not reported. Aircraft: Airbus A330 and A340. Flights measured: 6. Results directly comparable to standards: no. Findings: noise levels, averaged over the entire length of each flight, were between 70 and 80 decibels in all areas of the aircraft measured.

Study 6 (2004). Location measured: cabin. Technique: fixed sound level meter. Aircraft: Bombardier DHC-8 Q400, DHC-8 Q200, and CRJ-700. Flights measured: 18 (six per aircraft type). Results directly comparable to standards: no. Findings: median noise levels inside each aircraft type were between 75 and 85 decibels for each phase of flight (takeoff, cruise, and landing), except the rear seat position of the CRJ-700, which was reported at around 93 decibels on takeoff.

Study 7 (2006). Location measured: cabin. Technique: wearable dosimeter. Aircraft: Avro RJ-85. Flights measured: 20 flight attendant work days (exact number of flights not reported). Results directly comparable to standards: OSHA and NIOSH. Findings: noise levels experienced by individual flight attendants over full work days were not above the OSHA action level, but three of the work days likely exceeded the NIOSH recommended exposure limit.

Study 8 (2006). Location measured: cabin. Technique: fixed sound level meter. Aircraft: Airbus A321. Flights measured: 2. Results directly comparable to standards: no. Findings: average cabin noise levels were between 58 and 80 decibels for each phase of flight (pre-flight, taxi, takeoff, climb, cruise, approach and landing, and parking).

Study 9 (2012). Location measured: cabin. Technique: fixed sound level meter. Aircraft: Airbus A380; Boeing 737-300, 737-700, 747, 767, and 777. Flights measured: 83 (at least 5 per aircraft type). Results directly comparable to standards: no. Findings: average noise levels in each of the aircraft types measured were between 67 and 76 decibels.

Study 10 (2004). Location measured: cabin. Technique: fixed sound level meter. Aircraft: McDonnell Douglas MD-80. Flights measured: 1. Results directly comparable to standards: no. Findings: 5-second average noise levels on the individual aircraft measured ranged from 87 to 99 decibels.

Source: GAO analysis of selected studies. | GAO-18-109R


The following are the 10 studies we reviewed, presented in the order in which they appear in table 3.

Air Transport Association of America. Airline Cabin Air Quality Study. Washington, D.C.: April,
1994.

Bagshaw, M., M.C. Lower. “Hearing Loss on the Flight Deck – Origin and Remedy.” The
Aeronautical Journal Vol. 106 Issue 1059 (2002): 277-290.

Hills, A., K. Merrie. “Plane Sounding.” The Safety and Health Practitioner. July 2007.

Ivošević, J., D. Miljković, and K. Krajček. “Comparative Interior Noise Measurements in a Large
Transport Aircraft – Turboprops vs. Turbofans.” Proceedings of the 5th Congress of Alps-Adria
Acoustics Association AIR-04 (2012): 1-6.

Mellert, V., I. Baumann, N. Freese, R. Weber. “Impact of sound and vibration on health, travel
comfort and performance of flight attendants and pilots.” Aerospace Science and Technology 12
(2008): 18-24.

NIOSH, NIOSH Health Hazard Evaluation Report: Horizon Air, HETA#2002-0354-2931.
Cincinnati, OH: February, 2004.

NIOSH, NIOSH Health Hazard Evaluation Report: Mesaba Airlines, Inc., HETA#2003-0364-
3012. Cincinnati, OH: August, 2006.

Ozcan, H.K., S. Nemlioglu. “In-cabin Noise Levels During Commercial Aircraft Flights.”
Canadian Acoustics Vol. 34 No. 4 (2006): 31-35.

Spengler, J., J. Vallarino, E. McNeely, H. Estephan, In-flight/onboard monitoring ACER’s
component for ASHRAE 1262, Part 2. National Air Transportation Center of Excellence for
Research in the Intermodal Transport Environment (RITE) Report No. RITE-ACER-CoE-2012-6.
Washington, D.C.: April, 2012.

Spicer, C., M. Murphy, M. Holdren, J. Myers, I. MacGregor, C. Holloman, R. James, K. Tucker,
R. Zaborski, Relate air quality and other factors to comfort and health symptoms reported by
passengers and crew on commercial transport aircraft (Part 1), American Society for Heating,
Refrigerating, and Air Conditioning Engineers Project No. 1262-TRP. Atlanta, GA: July, 2004.


This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission
from GAO. However, because this work may contain copyrighted images or other material,
permission from the copyright holder may be necessary if you wish to reproduce this material
separately.


