Cloud Computing and IT Services – APA – 6 pages – Due 17 Oct – 4 References

Requirements: 


1) APA 6th Ed format

2) Due 17 October

3) 6 Pages (not including title page and references)

4) Minimum 4 References


5) Plagiarism-Free

Background:

The Assignment for this module involves thinking systematically about the IT services movement in general and about cloud computing as a particular instance of that approach. IT services is a general philosophy for organizing IT management, while cloud computing is a rather specific set of solutions to some not terribly well-defined problems; solutions that remove from the organization both costs and responsibilities. Often, cloud computing appears to be a solution in search of a problem. We have chosen to combine them here because, as you will have noted from the sources, the proponents of cloud computing often herald it as a particularly strong instance of the IT services approach. Cloud computing removes from a particular firm the responsibility of managing all that technology. Of course, it also removes from the organization control of its information future and, in many cases, even of its basic data; it can also cost quite a lot of money, which disappears into the services sector rather than into the hard assets category. While entrusting your IT management to cloud-based providers clearly reduces the influence of technologists relative to that of information users, it is considerably less certain that users are always better served in the cloud than they would be if they kept their old techno-geek folks chained in the basement to the old mainframe.

Assignment:

Your task is now to try to resolve this question at least in part. When you have read through the articles and related material, please compose a 4- to 6-page critical analysis paper, following the general point/counterpoint model described below, on the topic:

“Cloud computing and IT Services”

  • Provide a discussion of the benefits and issues of cloud computing. When should a firm purchase its IT services from the cloud? What are the implications for IT oversight and the firm’s governance of those systems? If the firm moves to an Open Source environment, how will its data be secured, and how will strategic advantage be gained over the competition?

You Will Be Particularly Assessed On

  • Your informed commentary and analysis—simply repeating what your sources say does not constitute an adequate paper.
  • Your ability to apply the professional language and terminology of IT systems and services correctly and in context; you are expected to be familiar with this language and use it appropriately.
  • Use at least the references included below:

 Nanavati, M., Colp, P., Aiello, B., & Warfield, A. (2014). Cloud security: A gathering storm. Communications of the ACM, 57(5), 70–79.

 Galup, S. D., Dattero, R., Quan, J. J., & Conger, S. (2009). An overview of IT service management. Communications of the ACM, 52(5), 124–127. [ACM Digital Library]

 Katzan, H. (2010). On an ontological view of cloud computing. Journal of Service Science, 3(1), 1–6. [ProQuest] 

 Chang, V., Bacigalupo, D., Wills, G., & De Roure, D. (2010). Categorization of cloud computing business models.


Cloud Security: A Gathering Storm

DOI:10.1145/2593686

Users’ trust in cloud systems is
undermined by the lack of transparency
in existing security policies.

BY MIHIR NANAVATI, PATRICK COLP,
BILL AIELLO, AND ANDREW WARFIELD

FRIDAY, 15:21. Transmogrifica headquarters, Palo Alto… News has just come in that Transmogrifica has won a major contract from Petrolica to model oil and gas reserves in the Gulf of Mexico. Hefty bonuses are on offer if the work is completed ahead of schedule.

Two people are seated in the boardroom.

“Let’s order 150 machines right away to speed things along,” says Andrea.

“Too expensive,” says Robin. “And what will we do with all the machines when we’re done? We’ll be over-provisioned.”

“Doesn’t matter,” says Andrea. “The bonus more than covers it, and we’ll still come out ahead. There’s a lot of money on the table.”

“What about power? Cooling? And how soon can we get our hands on the machines? It’s Friday. Let’s not lose the weekend.”

“Actually,” says Andrea, “why don’t we just rent the machines through the cloud? We’d have things up and running in a couple of hours.”

“Good idea! Get Sam on it, and I’ll run it by security.”

An hour and hundreds of clicks later, Transmogrifica has more than 100 nodes across North America, each costing less than a dollar per hour. Teams are already busy setting up their software stack, awaiting a green light to start work with live data.

Cloud computing has fundamentally changed the way people view computing resources; rather than being an important capital consideration, they can be treated as a utility, like power and water, to be tapped as needed. Offloading computation to large, centralized providers gives users the flexibility to scale available resources with changing demands, while economies of scale allow operators to provide required infrastructure at lower cost than most individual users hosting their own servers.

The benefits of such “utilification”36 extend well beyond the cost of underlying infrastructure; cloud providers can afford dedicated security and reliability teams with expertise far beyond the reach of an average enterprise. From a security perspective, providers can also
key insights

  • “Utilification” of computing delivers benefits in terms of cost, availability, and management overhead.

  • Shared infrastructure opens questions as to the best defenses to use against new and poorly understood attack vectors.

  • Lack of transparency concerning cloud providers’ security efforts and governmental surveillance programs complicates reasoning about security.


volve a significant benefit; since virtual
machines are analogous to physical
machines, administrators can move
existing in-house server workloads
in their entirety, or the full software
stack, dependencies and all, to the
cloud, with little or no modification.
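Because a hypervisor exposes a machine-like interface rather than an OS-style system-call interface, an entire guest machine can be created and populated programmatically. The following minimal sketch uses the Linux KVM ioctl API purely as an illustration (it is not taken from the article); error handling and guest CPU setup are heavily abbreviated.

    /* Minimal sketch: a hypervisor presents a hardware-like interface.
     * Illustrated with the Linux KVM ioctl API; guest setup is abbreviated. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        /* The "hardware-like" contract: create a whole virtual machine ... */
        int vm = ioctl(kvm, KVM_CREATE_VM, 0UL);

        /* ... give it a slot of physical-looking memory ... */
        void *guest_mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct kvm_userspace_memory_region region = {
            .slot = 0, .guest_phys_addr = 0x0,
            .memory_size = 0x1000, .userspace_addr = (unsigned long)guest_mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        /* ... and a virtual CPU that can run an entire guest OS unmodified. */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0UL);
        printf("vm fd=%d vcpu fd=%d\n", vm, vcpu);
        return 0;
    }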

Virtualization has also proved itself
an excellent match for various trends
in computer hardware over the past
decade; for example, increasingly par-
allel, multicore systems can be parti-
tioned into a number of single- or dual-
core virtual machines, so hardware can
be shared across multiple users while
maintaining isolation boundaries by
allowing each virtual machine access
to only a dedicated set of processors.

Multiplexing several virtual ma-
chines onto a single physical host
allows cloud operators to provide
low-cost leased computing to users.
However, such convenience comes at
a price, as users must now trust the
provider to “get it right” and are largely
helpless in the face of provider failures.

benefit from scale, as they can collect
large quantities of data and perform
analytics to detect intrusions and other
abnormalities not easily spotted at the
level of individual systems.

The value of such centralized de-
ployment is evident from its rapid up-
take in industry; for example, Netflix
migrated significant parts of its man-
agement and encoding infrastructure
to Amazon Web Services,12 and Drop-
box relies on Amazon’s Simple Stor-
age Service to store users’ data.11 Cloud
desktop services (such as OnLive Desk-
top) have also helped users augment
thin clients like iPads and Chrome-
books with access to remote worksta-
tions in data centers.

Virtualization is at the forefront of
this shift to cloud-hosted services—a
technique for machine consolidation
that helps co-locate multiple appli-
cation servers on the same physical
machine. Developed in the 1960s and
rediscovered in earnest over the past
decade, virtualization has struck a bal-

ance between the organizational need
to provision and administer software at
the granularity of a whole machine and
the operational desire to use expensive
datacenter resources as efficiently as
possible. Virtualization cleanly decou-
ples the administration of hosted soft-
ware from that of the underlying physi-
cal hardware, allowing customers to
provision servers quickly and account-
ably and providers to service and scale
their datacenter hardware without af-
fecting hosted applications.

Achieving a full range of features in a
virtualization platform requires many
software components. A key one is the
hypervisor, a special class of operating
systems that hosts virtual machines.
While conventional OSes present sys-
tem- and library-level interfaces to run
multiple simultaneous applications,
a hypervisor presents a hardware-like
interface that allows simultaneous ex-
ecution of many entire OS instances at
the same time. The coarser granularity
sandboxes provided by hypervisors in-


In traditional nonvirtualized envi-
ronments, securing systems involves
patching and securing the OS kernel,
often on a weekly basis. However, vir-
tualized environments expose a larger
attack surface than conventional non-
virtualized environments; even fully
patched and secured systems may be
compromised due to vulnerabilities
in the virtualization platform while si-
multaneously remaining vulnerable to
all attacks possible on nonvirtualized
systems. OS bugs have been exploited
to allow attackers to break process iso-
lation and compromise entire systems,
while virtualization-platform bugs risk
exposing an opportunity for attackers
within one virtual machine to gain ac-
cess to virtual machines belonging to
other customers. Exploits at either of
these layers endanger both private data
and application execution for users of
virtual machines.

Worse, virtualization exposes users
to the types of attacks typically absent
in nonvirtualized environments. Even
without a compromise of the virtu-
alization platform, shared hardware
could store sensitive state that is inad-
vertently revealed during side-channel
attacks. Despite attempts by virtualiza-
tion platforms to isolate hardware re-
sources, isolation is far from complete;
while each virtual machine may have
access to only a subset of the proces-
sors and physical memory of the sys-
tem, caches and buses are often still
shared. Attacks that leak encryption
keys and other sensitive data across in-
dependent virtual machines via shared
caches are being explored.26,40

Modern cloud deployments require
an unprecedented degree of trust on
the part of users, in terms of both the
intention and competence of service
providers. Cloud providers, for their
part, offer little transparency or reason
for users to believe their trust is well
placed; for example, Amazon’s secu-
rity whitepapers say simply that EC2
relies on a highly customized version
of the Xen virtualization platform to
provide instance isolation.6 While sev-
eral techniques to harden virtualized
deployments are available, it is unclear
which, if any, are being used by large
cloud service providers.

Critical systems and sensitive data
are not exclusive to cloud computing.
Financial, medical, and legal systems

While arguable that such reliance is
like relying on third parties for other
infrastructure (such as power and net-
work connectivity), there is one crucial
difference: Users rely on the provider
for both availability of resources and
the confidentiality of their data, mak-
ing critical both the security and the
availability of the systems.

Despite the best effort of cloud
providers, unexpected power outages,
hardware failures, and software mis-
configurations have caused several
high-profile incidents2–4 affecting the
availability of large-scale Internet ser-

vices, including Foursquare, Heroku,
Netflix, and Reddit. Unlike outages,
however, security exploits are not ob-
vious from the outside and could go
undetected for a long time. While the
broad reporting of failures and outages
is a strong incentive for providers to
give clear explanations, there is little
incentive to disclose compromises of
their systems to their users. Moreover,
cloud providers are legally bound to
cooperate with law-enforcement agen-
cies in some jurisdictions and may be
compelled, often in secrecy, to reveal
more about their users’ activities than
is commonly acknowledged.

In virtualized environments, misbe-
having “tenants” on a given machine
can try to compromise one another or
the virtualization platform itself. As
the lowest software layer, responsible
for isolating hosted virtual machines
and protecting against such attacks,
the virtualization platform is the un-
derlying trusted layer in virtualized de-
ployments. The trust customers place
in the security and stability of hosting
platforms is, to a large degree, trust in
the correctness of the virtualization
platform itself.

Figure 1. Example TCB of a virtualization platform. [Figure: VM A and VM B run on the hypervisor alongside a control VM (aka Domain 0) that contains the administrative tools, device drivers, and device emulation; together these components form the virtualization platform.]

Figure 2. TCB size for different virtualization platforms, from Nova.29 The Linux kernel size is a minimal system install, calculated by removing all unused device drivers, file systems, and network support. [Figure: bar chart of code size in KLOC for ESXi, Linux, Xen, KVM, and Hyper-V; legend: Hypervisor, Linux, Qemu, KVM, Windows.]


have long required practitioners com-
ply with licensing and regulatory re-
quirements, as well as strict auditing to
help assess damage in case of failure.
Similarly, aircraft (and, more recently,
car) manufacturers have been required
to include black boxes to collect data
for later investigation in case of mal-
functions. Cloud providers have begun
wooing customers with enhanced se-
curity and compliance certifications,
underlining the increasing need for so-
lutions for secure cloud computation.5
The rest of this article focuses on a key
underpinning of the cloud—the virtu-
alization platform—discussing some
of the technical challenges and recent
progress in achieving trustworthy host-
ing environments.

Meanwhile in Palo Alto…
Friday, 15:47. Transmogrifica headquarters, Palo Alto…

An executive enters the boardroom
where Robin is already seated.

“Hello, Robin. I hear celebrations
are in order. How much time do we
have?”

“Hey Sasha, just who I was looking
for,” Robin says. “It’s going to be tight.
Andrea was just here, and we thought
we’d buy virtual machines in the cloud
to speed things up. Anything security
would be unhappy about?”

“Well…,” says Sasha, “it isn’t as se-
cure as in-house. We could be shar-
ing the system with anyone. Literally
anyone—who might love for us to fail.
Xanadu, for instance, which is sore
about not getting the contract? It’s un-
likely, but it could have nodes on the
same hosts we do and start attacking
us.”

“What would it be able to do?,” says
Robin.

“In theory, nothing. The hypervisor
is supposed to protect against all such
attacks. And these guys take their se-
curity seriously; they also have a good
record. Can’t think of anything off-
hand, but it’s frustrating how opaque
everything is. We barely know what
system it’s running or if it’s hardened
in any way. Also, we’re completely in
the dark about the rest of the provid-
er’s security process. Makes it really
difficult to recommend anything one
way or the other.”

“That’s annoying. Anything else I
need to know?”

“Nothing I can think of, though let
me think it through a bit more.”

Trusted Computing Base
The set of hardware and software
components a system’s security de-
pends on is called the system’s trusted
computing base, or TCB. Proponents
of virtualization have argued for the
security of hypervisors through the
“small is secure” argument; hyper-
visors present a tiny attack surface
so must have few bugs and be se-
cure.23,32,33 Unfortunately, it ignores
the reality that TCB actually contains
not just the hypervisor but the entire
virtualization platform.

Note the subtle but crucial distinc-
tion between “hypervisor” and “virtu-
alization platform.” Architecturally,
hypervisors form the base of the virtu-
alization platform, responsible for at
least providing CPU multiplexing and
memory isolation and management.
Virtualization platforms as a whole
also provide the other functionality
needed to host virtual machines, in-
cluding device drivers to interface with
physical hardware, device emulation
to expose virtual devices to VMs, and
control toolstack to actuate and man-
age VMs. Some enterprise virtualiza-
tion platforms (such as Hyper-V and
Xen) rely on a full-fledged commodity
OS running with special privileges for
the functionality, making both the hy-
pervisor and the commodity OS part
of the TCB (see Figure 1). Other virtu-
alization platforms, most notably KVM
and VMware ESXi, include all required
functionality within the hypervisor
itself. KVM is an extension to a full-
fledged Linux installation, and ESXi is
a dedicated virtualization kernel that
includes device drivers. In each case,
this additional functionality means the
hypervisor is significantly larger than
the hypervisor component of either
Hyper-V or Xen. Regardless of the exact
architecture of the virtualization plat-
form, it must be trusted in its entirety.

Figure 2 makes it clear that even the
smallest of the virtual platforms, ESXi,a
is comparable in size to a stock Linux

a In 2009, Microsoft released a stripped-down
version of Windows Server 2008 called Server
Core23 for virtualized deployments; while fig-
ures concerning its size are still not available
to us, we do not anticipate the virtualization
platform being significantly smaller than ESXi.

kernel (200K LOC vs. 300K LOC). Given
that Linux has seen several privilege-
escalation exploits over the years, jus-
tifying the security of the virtualization
platform strictly as a function of the
size of the TCB fails to hold up.

A survey of existing attacks on virtu-
alization platforms20,27,37,38 reveals they,
like other large software systems, are
susceptible to exploits due to security
vulnerabilities; the sidebar “Anatomy
of an Attack” describes how an attack-
er can chain several existing vulner-
abilities together into a privilege esca-
lation exploit and bypass the isolation
between virtual machines provided by
the hypervisor.

Reduce Trusted Code?
One major concern with existing vir-
tualization platforms is the size of the
TCB. Some systems reduce TCB size
by “de-privileging” the commodity OS
component; for example, driver-spe-
cific domains14 host device drivers in
isolated virtual machines, removing
them from the TCB. Similarly, stub do-
mains30 remove the device emulation
stack from the TCB. Other approaches
completely remove the commodity OS
from the system’s TCB,10,24 effectively
making the hypervisor the only per-
sistently executing component of the
provider’s software stack a user needs
to trust. The system’s TCB becomes a
single, well-vetted component with sig-
nificantly fewer moving parts.

Boot code is one of the most com-
plex and privileged pieces of software.
Not only is it error prone, it is also not
used for much processing once the sys-
tem has booted. Many legacy devices
commodity OSes support (such as the
ISA bus and serial ports) are not rel-
evant in multi-tenant deployments like
cloud computing. Modifying the de-
vice-emulation stack to eliminate this
complex, privileged boot-time code25
once it has executed significantly re-
duces the size of the TCB, resulting in
a more trustworthy platform.

Prior to the 2006 introduction of
hardware support for virtualization,
all subsystems had to be virtualized
entirely through software. Virtualizing
the processor requires modification of
any hosted OS, either statically before
booting or dynamically through an on-
the-fly process called “binary transla-
tion.” When a virtual machine is cre-


from additional device compatibility or
remove the entire OS and sacrifice de-
vice compatibility by requiring hyper-
visor-specific drivers for every device.29

No matter how small the TCB is
made, sharing hardware requires a
software component to mandate ac-
cess to the shared hardware. Being
both complex and highly privileged,
this software is a real concern for the
security of the system, an observation
that begs the question whether it is
really necessary to share hardware re-
sources at all.

Researchers have argued that a stat-
ic partitioning of system resources can
eliminate the virtualization platform
from the TCB altogether.18 The virtual-
ization platform traditionally orches-
trates the booting of the system, mul-
tiplexing the virtual resources exposed
to virtual machines onto the available
physical resources. However, static
partitioning obviates the need for such
multiplexing in exchange for a loss of
flexibility in resource allocation. Parti-
tioning physical CPUs and memory is
relatively straightforward; each virtual
machine is assigned a fixed number
of CPU cores and a dedicated region
of memory that is isolated using the
hardware support for virtualizing page
tables. Devices (such as network cards
and hard disks) pose an even greater
challenge since it is not reasonable to
dedicate an entire device for each vir-
tual machine.
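The CPU side of such static partitioning amounts to pinning each guest's virtual CPUs to dedicated physical cores. As a rough, process-level analogue (illustrative only, not any hypervisor's actual mechanism), the Linux sched_setaffinity interface binds a thread to a single core; the core number below is arbitrary.

    /* Sketch of the CPU side of static partitioning: pin the current thread
     * (say, a vCPU thread) to one dedicated physical core so no other tenant
     * ever shares it. Core number 2 is an arbitrary example. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);                      /* this VM's dedicated core */

        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("thread pinned to core 2; no other tenant is scheduled here\n");
        return 0;
    }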

Fortunately, hardware virtualization
support is not limited to processors,
recently making inroads into devices
themselves. Single-root I/O virtualiza-
tion (SR-IOV)21 enables a single physi-
cal device to expose multiple virtual de-
vices, each indistinguishable from the
original physical device. Each such vir-
tual device can be allocated to a virtual
machine, with direct access to the de-
vice. All the multiplexing between the
virtual devices is performed entirely in
hardware. Network interfaces that sup-
port SR-IOV are increasingly popular,
with storage controllers likely to fol-
low suit. However, while moving func-
tionality to hardware does reduce the
amount of code to be trusted, there is
no guarantee the hardware is immune
to vulnerability or compromise.
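On Linux hosts, SR-IOV virtual functions are commonly enabled through the device's sriov_numvfs sysfs attribute; each resulting virtual function can then be passed through to a guest as if it were a dedicated device. The sketch below is illustrative only: the interface name eth0 is an assumption, and the NIC and its driver must actually support SR-IOV.

    /* Illustrative sketch: enable SR-IOV virtual functions on a Linux host by
     * writing a count to the device's sriov_numvfs sysfs attribute. */
    #include <stdio.h>

    int main(void) {
        const char *totalvfs = "/sys/class/net/eth0/device/sriov_totalvfs";
        const char *numvfs   = "/sys/class/net/eth0/device/sriov_numvfs";
        int total = 0;

        FILE *f = fopen(totalvfs, "r");
        if (!f || fscanf(f, "%d", &total) != 1) { perror("read sriov_totalvfs"); return 1; }
        fclose(f);

        /* Request up to four virtual functions; each can then be passed
         * through to a guest VM as if it were a dedicated NIC. */
        int want = total < 4 ? total : 4;
        f = fopen(numvfs, "w");
        if (!f) { perror("open sriov_numvfs"); return 1; }
        fprintf(f, "%d\n", want);
        fclose(f);
        printf("enabled %d virtual functions\n", want);
        return 0;
    }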

Eliminating the hypervisor, while
attractive in terms of security, sacri-
fices several benefits that make virtual-

ated, binary translation modifies the
instruction stream of the entire OS to
be virtualized, then executes this modi-
fied code rather than the original OS
code.

Virtualizing memory requires a
complex page-table scheme called
“shadow page tables,”7 an expansive,
extremely complicated process re-
quiring the hypervisor maintain page
tables for each process in a hosted
virtual machine. It also must monitor
any modifications to these page tables
to ensure isolation between different
virtual machines. Advances in proces-
sor technology render this functional-
ity moot by virtualizing both processor

and memory directly in hardware.
Some systems further reduce the

size of the TCB by splitting the func-
tionality of the virtualization platform
between a simple, low-level, system-
wide hypervisor, responsible for isola-
tion and security, and more complex,
per-tenant hypervisors responsible for
the remaining functionality of conven-
tional virtualization platforms.29,35 By
reducing the shared surface between
multiple VMs, such architectures help
protect against cross-tenant attacks.
In such systems, removing a large
commodity OS from the TCB presents
an unenviable trade-off; systems can
either retain the entire OS and benefit

Anatomy of an Attack

Not all discovered vulnerabilities are exploitable; in fact, most exploits rely on chaining
together multiple vulnerabilities. In 2009, Kostya Kortchinsky of Immunity Inc.
presented an attack that gave an administrator within a virtual machine running on a
VMware hypervisor access to a physical host.20

This is notable for two reasons: It affected the entire family of VMware products,
so both Workstation and ESX server were vulnerable, and it was reliable enough that
Canvas, Immunity’s commercially available penetration testing tool, included a
“cloudburst” mode to exploit systems and deploy different payloads. Rather than remain
an esoteric proof of concept, it was indeed a commercial exploit available to anyone.

The virtualization platform exposes virtual devices to guest machines through
device emulation. The device emulation layer runs as a user-mode process within
the host, acting as a translation and multiplexing layer between virtual and physical
devices. Cloudburst exploited multiple vulnerabilities in the emulated video card
interface to allow the guest arbitrary read-and-write access to host memory, giving it the
ability to corrupt random regions of memory.

The emulated video card accepts requests from the guest virtual machine through
a FIFO command queue and responds to these requests by updating a virtual frame
buffer. Both the queue and the frame buffer reside in the address space of the
emulation process on the host (vmware-vmx) but are shared with the video driver
in the guest. The rest of the process’s address space is private and should remain
inaccessible to the guest at all times.

SVGA_CMD_RECT_COPY is an example of a request issued by the driver to the
emulator, specifying the (X, Y) coordinates and dimensions of a rectangle to be copied
along with the (X, Y) coordinates of the destination. The emulated device responds
by copying the appropriate regions, indexed relative to the start of the frame buffer.
However, due to incorrect boundary checking, the device is able to supply an extremely
large or even negative X or Y coordinate and read data from arbitrary regions of the
process’s address space. Unfortunately, due to stricter bounds checking around the
destination coordinates, arbitrary regions of process memory cannot be written to.

Emulating 3D operations requires the emulated device maintain some device state
or contexts. The contexts are stored as an array within the process but are not shared
with the guest, which requests updates to the contexts through the command queue.
The SVGA_CMD_SETRENDERSTATE command takes an index into the context array
and a value to be written at that location but does not perform bounds checking on
the value of the index, effectively allowing the guest to write to any region of process
memory, relative to the context array. This relative write can be further extended by
exploiting the SVGA_CMD_SETLIGHTENABLED command that reads a pointer from
a fixed location within the context and writes the requested value to the memory the
pointer references. These two vulnerabilities can be chained to achieve arbitrary
memory writes; as the referenced pointer lies within the context array, it is easily
modified by exploiting the SETRENDERSTATE vulnerability.
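The bug class described above, a guest-controlled index used without bounds checking inside the device-emulation process, can be illustrated with a deliberately simplified C sketch. The structure and names below are hypothetical, not VMware's code; they only show why the unchecked index allows writes past the context array and how a bounds check closes the hole.

    /* Hypothetical, heavily simplified reconstruction of the bug class
     * described above; names and layout are illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    #define CTX_ENTRIES 256

    struct emu_state {
        uint32_t render_context[CTX_ENTRIES]; /* device state in the emulator */
        void    *secret_host_pointer;         /* private host memory after it */
    };

    /* Vulnerable: the guest controls 'index' but it is never bounds-checked,
     * so a large (or negative-cast-to-unsigned) index writes past the context
     * array into adjacent process memory. */
    void set_render_state_vulnerable(struct emu_state *s, uint32_t index, uint32_t value) {
        s->render_context[index] = value;          /* out-of-bounds write */
    }

    /* Fixed: validate guest-supplied indices before using them. */
    int set_render_state_checked(struct emu_state *s, uint32_t index, uint32_t value) {
        if (index >= CTX_ENTRIES)
            return -1;                             /* reject malformed request */
        s->render_context[index] = value;
        return 0;
    }

    int main(void) {
        struct emu_state s = {0};
        /* A malicious guest request: index far beyond the context array. */
        if (set_render_state_checked(&s, 0xFFFFFFFFu, 0x41414141u) < 0)
            puts("malformed request rejected");
        return 0;
    }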

When arbitrary reads and writes are possible, shell-code can be written into process
memory, then triggered by modifying a function pointer to reference the shell-code. As
no-execute protection prevents injected shell-code from being executed, the function
pointer must first call the appropriate memory protection functions to mark these
regions of memory as executable code pages; when this is done, however, the exploit
proceeds normally.



ization attractive to cloud computing.
Statically partitioning resources affects
the efficiency and utilization of the sys-
tem, as cloud providers are no longer
able to multiplex several virtual ma-
chines onto a single set of physical re-
sources. As trusted platforms beneath
OSes, hypervisors are conveniently
placed to interpose on memory and de-
vice requests, a facility often leveraged
to achieve promised levels of security
and availability.

Live migration9 involves moving
a running virtual machine from one
physical host to another without inter-
rupting its execution. Primarily used
for maintenance and load balanc-
ing, it allows providers to seamlessly
change virtual to physical placements
to better balance workloads or simply
free up a physical host for hardware or
software upgrades. Both live-migration
and fault-tolerant solutions rely on the
ability of the hypervisor to continually
monitor a virtual machine’s memory
accesses and mirror them to another
host. Interposing on memory accesses
also allows hypervisors to “dedupli-
cate,” or remove redundant copies,
and compress memory pages across
virtual machines. Supporting several
key features of cloud computing, virtu-
alization will likely be seen in cloud de-
ployments for the foreseeable future.
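The pre-copy idea behind live migration, repeatedly transferring pages the guest has dirtied since the last round until the remaining dirty set is small enough for a brief stop-and-copy, can be sketched as a toy simulation. The page counts and dirtying rates below are invented; this is a model of the idea, not any hypervisor's implementation.

    /* Toy model of pre-copy live migration: copy dirty pages in rounds while
     * the guest keeps running, then stop briefly to copy the small remainder. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define PAGES 10000

    static bool dirty[PAGES];

    /* Pretend the still-running guest dirties some pages between rounds. */
    static void guest_runs(int intensity) {
        for (int i = 0; i < intensity; i++)
            dirty[rand() % PAGES] = true;
    }

    int main(void) {
        int round = 0, intensity = 4000;

        for (int i = 0; i < PAGES; i++)
            dirty[i] = true;                      /* round 0: everything must move */

        for (;;) {
            int copied = 0;
            for (int i = 0; i < PAGES; i++)
                if (dirty[i]) { dirty[i] = false; copied++; }   /* "send" page i */

            printf("round %d: copied %d pages\n", round++, copied);
            if (copied < 100)                     /* dirty set small enough ...  */
                break;                            /* ... brief stop-and-copy now */

            guest_runs(intensity);                /* guest keeps dirtying memory */
            intensity /= 2;
        }
        puts("switch execution to the destination host");
        return 0;
    }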

Small Enough?
Arguments for trusting the virtualiza-
tion platform often focus on TCB size;
as a result, TCB reduction continues to
be an active area of research. While sig-
nificant progress—from shrinking the
hypervisor to isolating and removing
other core services of the platform—
has been made, in the absence of full
hardware virtualization support for ev-
ery device, the TCB will never be com-
pletely empty.

At what point is the TCB “small
enough” to be considered secure?
Formal verification is a technique to
mathematically prove the “correct-
ness” of a piece of code by comparing
implementation with a correspond-
ing specification of expected behav-
ior. Although capable of guaranteeing
an absence of programming errors,
it does only that; while proving the
realization of a system conforms to a
given specification, it does not prove
the security of the specification or the

system in any way. Or to borrow one
practitioner’s only somewhat tongue-
in-cheek observation: It “…only shows
that every fault in the specification
has been precisely implemented in
the system.”31 Moreover, formal veri-
fication quickly becomes intractable
for large pieces of code. While it has
proved applicable to some microker-
nels,19 and despite ongoing efforts
to formally verify Hyper-V,22 no virtu-
alization platform has been shrunk
enough to be formally verified.

Software exploits usually lever-
age existing bugs to modify the flow
of execution and cause the program
to perform an unauthorized action.
In code-injection exploits, attackers
typically add code to be executed via
vulnerable buffers. Hardware security
features help mitigate such attacks by
preventing execution of injected code;
for example, the no-execute (NX) bit
helps segregate regions of memory
into code and data sections, disallow-
ing execution of instructions resident
in data regions, while supervisor mode
execution protection (SMEP) prevents
transferring execution to regions of
memory controlled by unprivileged, us-
er-mode processes while executing in a
privileged context. Another class of at-
tacks called “return-oriented program-
ming”28 leverages code already present
in the system rather than adding any
new code and is not affected by these
security enhancements. Such attacks
rely on small snippets of existing code,
or “gadgets,” that immediately precede
a return instruction. By controlling the
call stack, the attacker can cause execu-
tion to jump between the gadgets as de-
sired. Since all executed code is original
read-only system code, neither NX nor
SMEP are able to prevent the attack.
While such exploits seem cumbersome
and impractical, techniques are avail-
able to automate the process.17
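The effect of the no-execute protection mentioned above can be demonstrated with a small program, assuming an x86-64 Linux host: bytes copied into a writable data page cannot run until the page is explicitly remapped as executable, which is exactly the step an injected payload must somehow arrange.

    /* Demonstration of NX protection (assumes x86-64 Linux). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42 ; ret */
        static const unsigned char payload[] = {0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3};

        /* Data page: readable and writable, but *not* executable (the NX case). */
        unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(page, payload, sizeof payload);

        /* Calling into the page at this point would fault with SIGSEGV,
         * because the injected "shell-code" sits in a non-executable region. */

        /* Only after the region is remarked executable does the call succeed. */
        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");
            return 1;
        }
        int (*fn)(void) = (int (*)(void))page;
        printf("injected code returned %d\n", fn());
        return 0;
    }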

Regardless of methodology, most
exploits rely on redirecting execu-
tion flow in an unexpected and un-
desirable way. Control-flow integrity
(CFI) prevents such an attack by en-
suring the program jumps only to
predefined, well-known locations
(such as functions, loops, and con-
ditionals). Similarly, returns are
able to return execution only to valid
function-call sites. This protection is
typically achieved by inserting guard



lines are evicted; the attacker deduces
the execution pattern code based on
the evicted cache lines and is able to
extract the victim’s cryptographic key.
Moreover, combining such attacks with
techniques to exploit cloud placement
algorithms26 could allow attackers to
identify victims precisely, arrange to
forcibly co-locate virtual machines, and
extract sensitive data from them.
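The cache-probing pattern that underlies these attacks can be sketched in a few lines. The toy below uses the data cache and the x86 rdtscp timestamp counter (via <x86intrin.h>) rather than the instruction-cache channel used in the published attacks, and the buffer sizes are arbitrary; it only illustrates how eviction shows up as measurable latency.

    /* Toy prime-and-probe sketch: prime the cache with our own buffer, then
     * time re-accesses; evicted lines show markedly higher latency. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    #define LINES 4096
    #define LINE_SIZE 64

    static volatile unsigned char probe_buf[LINES * LINE_SIZE];

    /* Time one access; a slow access means the line was evicted since we
     * last touched it (i.e., someone else used that cache set). */
    static uint64_t time_access(volatile unsigned char *p) {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void) {
        /* Prime: touch every line so the buffer occupies the cache. */
        for (size_t i = 0; i < LINES; i++)
            (void)probe_buf[i * LINE_SIZE];

        /* (In a real attack, the attacker now waits for the victim to run.) */

        /* Probe: re-time the lines; high latencies reveal which cache sets
         * the victim's code or data touched in the meantime. */
        for (size_t i = 0; i < LINES; i += 512)
            printf("line %4zu: %llu cycles\n", i,
                   (unsigned long long)time_access(&probe_buf[i * LINE_SIZE]));
        return 0;
    }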

Modern hypervisors are helpless to
prevent them, as they have no way to
partition or isolate the caches, which
are often shared between cores on the
same processor on modern architec-
tures. While researchers have pro-
posed techniques to mitigate timing
attacks (such as randomly delaying
requests, adjusting the virtual ma-
chine’s perception of time and add-
ing enough noise to the computation
to prevent information leakage), no
low-overhead practically deployable
solutions are available. Such mitiga-
tion techniques remain an active area
of research.

What to Do?
The resurgence of hypervisors is a
relatively recent phenomenon, with
significant security advances in only a
few years. However, they are extremely
complex pieces of software, and writ-
ing a completely bug-free hypervisor
is daunting, if not impossible; vulner-
abilities will therefore continue to exist
and be exploited.

Assuming any given system will
eventually be exploited, what can we
do? Recovering from an exploit is so
fraught with risk (overlooking even a
single backdoor can lead to re-com-
promise) it usually entails restoring
the system from a known good backup.
Any changes since this last backup are
lost. However, before recovery can be-
gin, the exploit must first be detected.
Any delay toward such detection repre-
sents a window of opportunity for an
attacker to monitor or manipulate the
entire system.

Comprehensive logging and audit-
ing techniques are required in several
application domains, especially for
complying with many of the standards
cloud providers aim to guarantee.5
Broadly speaking, such audit trails
have helped uncover corporate impro-
priety, financial fraud, and even piece
together causes of tragic accidents.

conditions in the code to validate any
control-flow transfer instructions.1,13
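A minimal, hand-written illustration of such a guard condition follows; real CFI instrumentation is emitted by the compiler and uses per-call-site label sets, whereas the single label value and structure below are assumptions made for brevity.

    /* Minimal sketch of a CFI guard: before an indirect call, verify the
     * target carries an expected label so a corrupted pointer cannot divert
     * execution to an arbitrary address. Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CFI_LABEL 0x5A5AC0DEu

    struct cfi_target {
        unsigned label;          /* emitted alongside every valid call target */
        void (*fn)(void);
    };

    static void safe_handler(void) { puts("valid indirect call target"); }

    static struct cfi_target handler = { CFI_LABEL, safe_handler };

    /* Guarded indirect call: abort on an unexpected control transfer. */
    static void cfi_call(struct cfi_target *t) {
        if (t->label != CFI_LABEL) {
            fputs("CFI violation: unexpected control transfer\n", stderr);
            abort();
        }
        t->fn();
    }

    int main(void) {
        cfi_call(&handler);
        return 0;
    }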

Software-based CFI implementa-
tions typically rely on more privileged
components to ensure the enforce-
ment mechanism itself is not disabled
or tampered with; for example, the
kernel can prevent user-space applica-
tions from accessing and bypassing the
inserted guard conditions. However,
shepherding the execution of hypervi-
sors through CFI is more of a challenge;
as the hypervisor is the most privileged
software component, there is noth-
ing to prevent it from modifying the
enforcement engine. A possible work-
around34 is to mark all memory as read-
only, even to the hypervisor, and fault
on any attempted modification. Such
modification is verified while handling
the fault, and, though benign updates
to nonsensitive pages are allowed, any
attempt to modify the enforcement
engine is blocked. Despite the difficul-
ties, monitoring control flow is one of
the most comprehensive techniques to
counter code-injection exploits.

Shared Hardware Resources
A hypervisor provides strong isola-
tion guarantees between virtual ma-
chines, preventing information leak-
age between them. Such guarantees
are critical for cloud computing; their
absence would spell the end for pub-
lic-cloud deployments. The need for
strong isolation is typically balanced
against another operational require-
ment, that providers share hardware
resources between virtual machines
to provide services at the scale and
cost users demand.

Side-channel attacks bypass isola-
tion boundaries by ignoring the soft-
ware stack and deriving information
from shared hardware resources; for
example, timing attacks infer certain
system properties by measuring the
variance in time taken for the same
operation across several executions
under varying circumstances. Timing
attacks on shared instruction caches
have allowed attackers to extract cryp-
tographic keys from a co-located vic-
tim’s virtual machine.40

These attacks are conceptually sim-
ple: The attacker fills up the i-cache,
then waits for the victim to run. The
exact execution within the victim’s vir-
tual machine determines which cache


For cloud computing, such logs can
help identify exactly how and when the
system was compromised and what re-
sources were affected.

Tracking information flows be-
tween virtual machines and the man-
agement tool stack allows logging
unauthorized use of highly privileged
administrative tools.15 Not only is such
use tracked, the specifics of the inter-
action are recorded for future audit. If
a virtual machine’s memory is read, the
log stores the exact regions of accessed
memory, along with their contents. Us-
ers can then assess the effects of the ac-
cesses and resolve them appropriately;
for instance, if regions corresponding
to a password or encryption keys are
read, users can change the password or
encryption keys before incurring any
further damage.
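An illustrative sketch of the kind of record such auditing might keep is shown below: which guest-memory region a privileged tool read, checked against a range the user has registered as sensitive. The structures and field names are assumptions made for illustration, not any particular system's API.

    /* Illustrative audit record: a privileged read of guest memory, checked
     * against a user-registered sensitive range (e.g., key material). */
    #include <stdint.h>
    #include <stdio.h>

    struct audit_record {
        uint64_t vm_id;
        uint64_t guest_addr;   /* start of the region that was read */
        uint64_t length;
        uint64_t timestamp;
    };

    struct sensitive_range { uint64_t start, length; const char *what; };

    /* A user-registered range holding an encryption key (example values). */
    static const struct sensitive_range key_range = { 0x7f0000, 0x1000, "TLS private key" };

    static int overlaps(const struct audit_record *r, const struct sensitive_range *s) {
        return r->guest_addr < s->start + s->length &&
               s->start < r->guest_addr + r->length;
    }

    int main(void) {
        struct audit_record r = { .vm_id = 42, .guest_addr = 0x7f0800,
                                  .length = 0x200, .timestamp = 1400000000 };
        if (overlaps(&r, &key_range))
            printf("ALERT: admin read of VM %llu touched %s; rotate the key\n",
                   (unsigned long long)r.vm_id, key_range.what);
        return 0;
    }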

Beyond this, advanced recovery so-
lutions can help recover quickly from
security breaches and minimize data
loss. Built on top of custom logging en-
gines,16 they provide analytics to clas-
sify actions as either tainted or non-
tainted. Recovery is now much more
fine grain; by undoing all effects of
only the tainted actions, an attack can
be reversed without losing all useful
changes since the last backup. Alter-
natively, during recovery, all actions,
including the attack, are performed
against a patched version of the sys-
tem. The attack will now fail, while
useful changes are restored.

Back at Transmogrifica…
Friday, 16:35. Transmogrifica headquarters, Palo Alto…

Sasha enters the boardroom where
Robin and Andrea are already seated.

“Robin, Andrea, how confidential is
this Petrolica data we’ll be processing?”

“Well,” says Robin, glancing toward
Andrea, “Obviously, it’s private data,
and we don’t want anyone to have ac-
cess to it. But it isn’t medical- or legal-
records-level sensitive, if that’s what
you’re getting at. Why do you ask?”

“I hope you realize anyone with
sufficient privileges in the cloud pro-
vider could read or modify all the
data. The provider controls the entire
stack. There’s absolutely nothing we
can do about it. Worse, we wouldn’t
even know it happened. Obviously I’m
not suggesting it would happen. I just
don’t know the extent of our liability.”

“Shouldn’t the provider’s SOC
compliance ensure it’s got steps in
place to prevent that from happen-
ing?,” says Andrea before Robin could
respond. “Anyhow, I’ll run it by legal
and see how unhappy they are. We
should probably be fine for now, but
it’s worth keeping in mind for any oth-
er projects.”

Watching the Watchers
Isolating virtual machines in co-tenant
deployments relies on the underlying
hypervisor. While securing the hyper-
visor against external attacks is indeed
vital to security, it is not the only vec-
tor for a determined attacker. Today’s
hypervisors run a single management
stack, controlled by a cloud provider.
Capable of provisioning and destroy-
ing virtual machines, the management
toolstack can also read the memory
and disk content of every virtual ma-
chine, making it an attractive target
for compromising the entire system.

This single administrative tool-
stack is an artifact of the way hypervi-
sors have been designed rather than
a fundamental limitation of hyper-
visors themselves. While providers
have no incentive to undermine their
users’ operations (their business in-
deed depends on maintaining user
satisfaction), the carelessness or ma-
liciousness of a single, well-placed ad-
ministrator could compromise the se-
curity of an entire system.

Revelations over the past year in-
dicate several providers have been
required to participate in large-scale
surveillance operations to aid law-
enforcement and counterintelligence
efforts. While such efforts concen-
trate largely on email and social-net-
work activity, the full extent of surveil-
lance remains largely unknown to the
public. It would be naïve to believe
providers with the ability to monitor
users’ virtual machines for sensitive
data (such as encryption keys) are not
required to do so; furthermore, they
are also unable to reveal such disclo-
sures to their customers.

Compliance standards also require
restricting internal access to customer
data while limiting the ability of a sin-
gle administrator to make significant
changes without appropriate checks
and balances.5 As the single toolstack
architecture bestows unfettered access

to all virtual machines on the adminis-
trators, it effectively hampers the abil-
ity of operators to provide the guaran-
tees required by their customers, who,
in turn, could opt for more private
hosting solutions despite the obvious
advantages of cloud hosting in terms
of scale and security.

Recognizing this danger, some sys-
tems advocate splitting the monolithic
administrative toolstack into several
mini toolstacks8,10 each capable of ad-
ministrating only a subset of the entire
system. By separating the provisioning
of resources from their administra-
tion, users would have a private tool-
stack to manage their virtual machines
to a much greater degree than with pre-
provisioned machines (see Figure 3).
As a user’s toolstack can interpose on
memory accesses from only the guests
assigned to it, users can encrypt the
content of their virtual machines if de-
sired. Correspondingly, platform ad-
ministrators no longer need rights to
access the memory of any guest on the
system, limiting their ability to snoop
sensitive data.

“Nested virtualization,” which al-
lows a hypervisor to host other hyper-
visors in addition to regular OSes, pro-
vides another way to enforce privacy
for tenants; Figure 4 outlines a small,
lightweight, security-centric hypervi-
sor hosting several private, per-tenant,
commodity hypervisors.39 Isolation,
security, and resource allocation are
separated from resource manage-
ment. Administrators at the cloud
provider manage the outer hypervisor,
allocating resources managed by the
inner hypervisors. The inner hypervi-
sors are administered by the clients
themselves, allowing them to encrypt
the memory and disks of their systems
without sacrificing functionality. Since
device management and emulation
are performed by the inner hypervisor,
the outer, provider-controlled, hyper-
visor never needs access to the memo-
ry of a tenant, thereby maintaining the
tenant’s confidentiality.

While both split toolstacks and
nested virtualization help preserve
confidentiality from rogue admin-
istrators, the cloud provider itself
remains a trusted entity in all cases.
After all, an operator with physical
access to the system could simply
extract confidential data and encryp-


cloud providers have a strong incentive
to bolster the confidence of their cus-
tomers, such transparency conflicts
with an operational desire to maintain
some degree of secrecy regarding the
software stack for competitive reasons.

At the End of a Long Day…
Friday, 21:17. Transmogrifica headquarters, Palo Alto…

Robin and Andrea are reflecting on
their long day in the boardroom.

“Well, we got green lights from secu-
rity and legal. Sam’s team just called to
say they’ve got it all tested and set up.
We should be starting any time now,”
says Andrea.

“The cloud scares me, Andrea. It’s
powerful, convenient, and deadly. Be-
fore we realize it, we’ll be relying on it,
without understanding all the risks.
And if that happens, it’ll be our necks.
But it’s been right for us this time
around. Good call.”

Epilogue. Transmogrifica complet-
ed the contract ahead of schedule with
generous bonuses for all involved. Cen-
tralized computation was a successful
experiment, and Transmogrifica today
specializes in tailoring customer work-
loads for faster cloud processing. The
150-node cluster server remains un-
purchased.

Conclusion
Cloud computing, built with powerful
servers at the center and thin clients at
the edge, is a throwback to the main-
frame model of computing. Seduced
by the convenience and efficiency of
such a model, users are turning to the
cloud in increasing numbers. Howev-
er, this popularity belies a real security
risk, one often poorly understood or
even dismissed by users.

Cloud providers have a strong in-
centive to engineer their systems to

tion keys from the DRAM of the host
or run a malicious hypervisor to do
the same. While the former is a dif-
ficult, if not impossible, problem to
solve, recent advances help allow us-
ers to gain some assurance about the
underlying system.

Trust and Attestation
Consider the following scenario com-
monly seen in fiction: Two characters
meet for the first time, with one, much
to the surprise of the other, seeming to
act out of character. Later, it becomes
apparent that a substitution has oc-
curred and the character is an impos-
ter. This illustrates a common prob-
lem in security, where a user is forced
to take the underlying system at its
word, with no way of guaranteeing it is
what it claims to be. This is particular-
ly important in cloud environments,
as the best security and auditing tech-
niques are worthless if the platform
disables them.

“Trusted boot” is a technology
that allows users to verify the identity
of the underlying platform that was
booted. While ensuring the loaded
virtualization platform is trusted, it
makes no further guarantees about
security; the system could have been
compromised after the boot and still
be verified successfully. Two tech-
niques that provide a trusted boot are
unified extensible firmware interface
(UEFI) and trusted platform module

(TPM). While differing in implemen-
tation, both rely on cryptographic
primitives to establish a root of trust
for the virtualization platform.

UEFI is a replacement for the BIOS
and is the first software to be executed
during the boot process. In a secure
boot, each component of the boot pro-
cess verifies the identity of the next
one by calculating its digest or hash
and comparing it against an expected
value. The initial component requires
a platform key in the firmware to attest
its identity. TPM differs slightly in its
execution; rather than verify the iden-
tity of each component while booting,
a chain with the digest of each compo-
nent is maintained in the TPM. Verifi-
cation is deferred for a later time. Cli-
ents can verify the entire boot chain of
a virtualization platform, comparing it
against a known-good value, a process
called “remote attestation.”
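The measurement chain behind trusted boot and remote attestation can be sketched as a running digest into which each boot component is folded, in the spirit of a TPM PCR extend operation (new value = SHA-256(old value || component digest)). The sketch below uses OpenSSL's SHA256 function and an invented component list; link with -lcrypto.

    /* Sketch of a boot measurement chain in the spirit of a TPM PCR extend.
     * The component list is made up for illustration. Link with -lcrypto. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    static void extend(unsigned char chain[SHA256_DIGEST_LENGTH],
                       const unsigned char *component, size_t len) {
        unsigned char digest[SHA256_DIGEST_LENGTH];
        unsigned char buf[2 * SHA256_DIGEST_LENGTH];

        SHA256(component, len, digest);              /* measure the component  */
        memcpy(buf, chain, SHA256_DIGEST_LENGTH);    /* old chain value ...    */
        memcpy(buf + SHA256_DIGEST_LENGTH, digest, SHA256_DIGEST_LENGTH);
        SHA256(buf, sizeof buf, chain);              /* ... extended by digest */
    }

    int main(void) {
        unsigned char chain[SHA256_DIGEST_LENGTH] = {0};   /* root of trust */
        const char *boot_components[] = { "firmware", "bootloader", "hypervisor" };

        for (size_t i = 0; i < sizeof boot_components / sizeof boot_components[0]; i++)
            extend(chain, (const unsigned char *)boot_components[i],
                   strlen(boot_components[i]));

        /* A remote verifier compares this final value against a known-good one. */
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", chain[i]);
        putchar('\n');
        return 0;
    }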

Trusted-boot techniques give users
a way to gain more concrete guaran-
tees about the virtualization platform
to which they are about to commit
sensitive data. By providing a way to
verify themselves, then allowing users
to proceed only if they are satisfied that
certain criteria are met, they help over-
come one of the key concerns in cloud
computing. However, this increased
security comes at a cost; to realize the
full benefit of attestation, the source
code, or at least the binaries, must be
made available to the attestor. While

Figure 3. Multiple independent toolstacks. [Figure: Xen hosts a control VM (commodity OS) with Qemu, disk controllers, and network devices, plus User A's VM and User B's VM; each user's VM is managed by that user's own toolstack.]

Figure 4. Nested virtualization. [Figure: an outer, provider-controlled hypervisor hosts per-user inner hypervisors (User A's and User B's), each running that user's VMs (VM 1, VM 2, VM 3).]


provide strong isolation and security
guarantees while maintaining perfor-
mance and features. Their business
case relies on getting this combina-
tion of trade-offs right. However, they
have an equally strong incentive to
keep their software and management
techniques proprietary (to safeguard
their competitive advantages) and
not report bugs or security incidents
(to maintain their reputations). While
hypervisors have been studied exten-
sively, and many security enhance-
ments have been proposed in the lit-
erature, the actual techniques used
or security incidents detected in com-
mercial deployments are generally
not shared publicly.

All this makes it difficult for users
to evaluate the suitability of a commer-
cial virtualization platform for an ap-
plication. Given the overall profitabil-
ity and growth of the business, many
clients have sufficient trust to run cer-
tain applications in the cloud. Equally
clear is that concern over security and
liability is holding back other clients
and applications; for example, Cana-
dian federal law restricts health care
and other sensitive data to machines
physically located in Canada, while
recent news of U.S.-government sur-
veillance programs has prompted in-
creased caution in adopting cloud ser-
vices, particularly in Europe.

As with most types of trust, trust in
a cloud provider by a client is based
on history, reputation, the back and
forth of an ongoing commercial rela-
tionship, and the legal and regulatory
setting, as much as it is on technical
details. In an effort to entice custom-
ers to move to the cloud, providers
already provide greater transparency
about their operations and proactively
attempt to meet the compliance
standards required by the financial and
the health-care industries. Given the
competing demands of the cloud-infra-
structure business, this trend toward
transparency is likely to continue.

References
1. Abadi, M., Budiu, M., Erlingsson, U., and Ligatti, J.

Control-flow integrity. In Proceedings of the 12th
ACM Conference on Computers and Communications
Security. ACM Press, New York, 340–353.

2. Amazon. Summary of the Amazon EC2 and Amazon
RDS Service Disruption in the U.S. East Region; http://
aws.amazon.com/message/65648/

3. Amazon. Summary of the December 24, 2012 Amazon
ELB Service Event in the U.S. East Region; http://aws.
amazon.com/message/680587/

4. Amazon. Summary of the October 22, 2012 AWS

Service Event in the U.S. East Region; https://aws.
amazon.com/message/680342/

5. Amazon. AWS Risk and Compliance, 2014; http://
media.amazonwebservices.com/AWS_Risk_and_
Compliance_Whitepaper

6. Amazon. Overview of Security Processes, 2014; http://
media.amazonwebservices.com/pdf/AWS_Security_
Whitepaper

7. Bugnion, E., Devine, S., Govil, K., and Rosenblum,
M. Disco: Running commodity operating systems
on scalable multiprocessors. ACM Transactions on
Computer Systems 15, 4 (1997), 412–447.

8. Butt, S., Lagar-Cavilla, H.A., Srivastava, A., and
Ganapathy, V. Self-service cloud computing. In
Proceedings of the 2012 ACM Conference on
Computer and Communications Security. ACM Press,
New York, 2012, 253–264.

9. Clark, C., Fraser, K., Hand, S., Hansen, J.G., Jul, E.,
Limpach, C., Pratt, I., and Warfield, A. Live migration
of virtual machines. In Proceedings of the Second
USENIX Symposium on Networked Systems Design
and Implementation. USENIX Association, Berkeley,
CA, 2005, 273–286.

10. Colp, P., Nanavati, M., Zhu, J., Aiello, W., Coker, G.,
Deegan, T., Loscocco, P., and Warfield, A. Breaking
up is hard to do: Security and functionality in a
commodity hypervisor. In Proceedings of the 23rd
ACM Symposium on Operating Systems Principles.
ACM Press, New York, 189–202.

11. Dropbox. Where Does Dropbox Store Everyone’s Data?;
https://www.dropbox.com/help/7/en

12. Edbert, J. How Netflix operates clouds for maximum
freedom and agility. AWS Re:Invent, 2012; http://www.
youtube.com/watch?v=s0rCGFetdtM

13. Erlingsson, U., Abadi, M., Vrable, M., Budiu, M., and
Necula, G.C. XFI: Software guards for system address
spaces. In Proceedings of the Seventh Symposium
on Operating System Design and Implementation.
USENIX Association, Berkeley, CA, 2006, 75–88.

14. Fraser, K., Hand, S., Neugebauer, R., Pratt, I., Warfield,
A., and Williamson, M. Safe hardware access with the
Xen virtual machine monitor. In Proceedings of the
First Workshop on Operating System and Architectural
Support for the On-Demand IT Infrastructure, 2004.

15. Ganjali, A. and Lie, D. Auditing cloud management
using information flow tracking. In Proceedings of
the Seventh ACM Workshop on Scalable Trusted
Computing. ACM Press, New York, 2012, 79–84.

16. Goel, A., Po, K., Farhadi, K., Li, Z., and De Lara, E. The
Taser intrusion recovery system. In Proceedings of the
20th ACM Symposium on Operating Systems Principles.
ACM Press, New York, 2005, 163–176.

17. Hund, R., Holz, T., and Freiling, F.C. Return-oriented
rootkits: Bypassing kernel code integrity protection
mechanisms. In Proceedings of the 18th Conference on
USENIX Security. USENIX Association, Berkeley, CA,
2009, 383–398.

18. Keller, E., Szefer, J., Rexford, J., and Lee, R.B.
NoHype: Virtualized cloud infrastructure without
the virtualization. In Proceedings of the 37th Annual
International Symposium on Computer Architecture.
ACM Press, New York, 2010, 350–361.

19. Klein, G., Elphinstone, K., Heiser, G., Andronick, J., Cock,
D., Derrin, P., Elkaduwe, D., Engelhardt, K., Kolanski,
R., Norrish, M., Sewell, T., Tuch, H., and Winwood, S. seL4:
Formal verification of an OS kernel. In Proceedings of the
ACM SIGOPS 22nd Symposium on Operating Systems
Principles. ACM Press, New York, 2009, 207–220.

20. Kortchinsky, K. Cloudburst: A VMware guest to host
escape story. Presented at Black Hat USA 2009;
http://www.blackhat.com/presentations/bh-usa-09/
KORTCHINSKY/BHUSA09-Kortchinsky-Cloudburst-
SLIDES

21. Kutch, P. PCI-SIG SR-IOV Primer: An Introduction
to SR-IOV Technology. Application Note 321211-002,
Intel Corp., Jan. 2011; http://www.intel.com/content/
dam/doc/application-note/pci-sig-sr-iov-primer-sr-iov-
technology-paper

22. Leinenbach, D. and Santen, T. Verifying the Microsoft
Hyper-V hypervisor with VCC. In Proceedings of the
Second World Congress on Formal Methods. Springer-
Verlag, Berlin, Heidelberg, 2009, 806–809.

23. Microsoft. Windows Server 2008 R2 Core: Introducing
SCONFIG; http://blogs.technet.com/b/virtualization/
archive/2009/07/07/windows-server-2008-r2-core-
introducing-sconfig.aspx

24. Murray, D.G., Milos, G., and Hand, S. Improving Xen
security through disaggregation. In Proceedings of
the Fourth ACM SIGPLAN/SIGOPS International
Conference on Virtual Execution Environments. ACM
Press, New York, 2008, 151–160.

25. Nguyen, A., Raj, H., Rayanchu, S., Saroiu, S., and
Wolman, A. Delusional boot: Securing hypervisors
without massive reengineering. In Proceedings of
the Seventh ACM European Conference on Computer
Systems. ACM Press, New York, 2012, 141–154.

26. Ristenpart, T., Tromer, E., Shacham, H., and Savage,
S. Hey, you, get off of my cloud: Exploring information
leakage in third-party compute clouds. In Proceedings
of the 16th ACM Conference on Computer and
Communications Security. ACM Press, New York,
2009, 199–212.

27. Rutkowska, J. and Wojtczuk, R. Preventing and
detecting Xen hypervisor subversions. Presented at
Black Hat USA 2008; http://www.invisiblethingslab.
com/resources/bh08/part2-full

28. Shacham, H. The geometry of innocent flesh on the
bone: Return into libc without function calls (on the
x86). In Proceedings of the 14th ACM Conference on
Computer and Communications Security. ACM Press,
New York, 552–561.

29. Steinberg, U. and Kauer, B. NOVA: A microhypervisor-
based secure virtualization architecture. In
Proceedings of the Fifth European Conference on
Computer Systems. ACM Press, New York, 2010,
209–222.

30. Thibault, S. and Deegan, T. Improving performance
by embedding HPC applications in lightweight Xen
domains. In Proceedings of the Second Workshop on
System-Level Virtualization for High-Performance
Computing. ACM Press, New York, 2008, 9–15.

31. University of New South Wales and NICTA. seL4.
http://www.ertos.nicta.com.au/research/sel4/

32. VMware. Benefits of Virtualization with VMware; http://
www.vmware.com/virtualization/virtualization-basics/
virtualization-benefits.html

33. VMware. VMware hypervisor: Smaller footprint for
better virtualization solutions; http://www.vmware.
com/virtualization/advantages/robust/architectures.
html

34. Wang, Z. and Jiang, X. Hypersafe: A lightweight
approach to provide lifetime hypervisor control-flow
integrity. In Proceedings of the 2010 IEEE Symposium
on Security and Privacy. IEEE Computer Society,
Washington, D.C., 2010, 380–395.

35. Wang, Z., Wu, C., Grace, M., and Jiang, X. Isolating
commodity hosted hypervisors with HyperLock. In
Proceedings of the Seventh ACM European Conference
on Computer Systems. ACM Press, New York, 2012,
127–140.

36. Wilkes, J., Mogul, J., and Suermondt, J. Utilification.
In Proceedings of the 11th ACM SIGOPS European
Workshop. ACM Press, New York, 2004.

37. Wojtczuk, R. A stitch in time saves nine: A case of
multiple OS vulnerability. Presented at Black Hat USA
2012; http://media.blackhat.com/bh-us-12/Briefings/
Wojtczuk/BH_US_12_Wojtczuk_A_Stitch_In_Time_
WP

38. Wojtczuk, R. and Rutkowska, J. Following the
White Rabbit: Software Attacks against Intel VT-d
Technology, 2011; http://www.invisiblethingslab.com/
resources/2011/SoftwareAttacksonIntelVT-d

39. Zhang, F., Chen, Chen, H., and Zang, B. Cloudvisor:
Retrofitting protection of virtual machines in multi-
tenant cloud with nested virtualization. In Proceedings
of the 23rd ACM Symposium on Operating Systems
Principles. ACM Press, New York, 2011, 203–216.

40. Zhang, Y., Juels, A., Reiter, M.K., and Ristenpart, T.
Cross-VM side channels and their use to extract
private keys. In Proceedings of the 2012 ACM
Conference on Computer and Communications
Security. ACM Press, New York, 2012, 305–316.

Mihir Nanavati (mihirn@cs.ubc.ca) is a Ph.D. student in
the Department of Computer Science at the University of
British Columbia, Vancouver.

Patrick Colp (pjcolp@cs.ubc.ca) is a Ph.D. student in the
Department of Computer Science at the University of
British Columbia, Vancouver.

William Aiello (aiello@cs.ubc.ca) is a professor in the
Department of Computer Science at the University of
British Columbia, Vancouver.

Andrew Warfield (andy@cs.ubc.ca) is an assistant
professor in the Department of Computer Science at the
University of British Columbia, Vancouver.

Copyright held by Author/Owner(s). Publication rights
licensed to ACM. $15.00


An Overview of IT Service Management

doi:10.1145/1506409.1506439

By Stuart D. Galup, Ronald Dattero, Jim J. Quan, and Sue Conger

The July 2006 issue of Communications2 was
dedicated to the topic of Services Science, a new
approach to viewing, developing, and deploying
Information and Communication Technologies (ICT).
The introduction, written by Jim Spohrer and Doug
Riecken, both of IBM Corporation, stated:

“To the majority of computer scientists, whether
in academia or industry, the term “services” is
associated with Web services and service-oriented
architectures. However, there is a broader story
to be told of the remarkable growth of the service
sector, which has come to dominate economic
activity in most advanced economies over the
last 50 years. … The opportunity to innovate in
services, to realize business and societal value
from knowledge about service, to research,
develop, and deliver new information services and
business services, has never been greater.”

In this article, we focus on the subtopic of Service
Science that deals with the management of ICT service
operations.

The economies of the industrialized nations have transitioned, over the past 100 years, from agricultural and manufacturing based to government and business services (GBS) based. The GBS portions of the industrialized nations' economies exceed 75%. In the U.S., vast arrays of GBS comprise nearly 80% of the country's economic activity.8 As a result, the USA Bureau of Labor Statistics projects that employment growth will continue to be concentrated in the GBS sectors of the economy during the next decade.1

During the 1930s, the U.S. Department of Commerce coined the term "service," using three economic sectors to describe the economy: agriculture, manufacturing, and service. Service, at the time, was a catchall for all the activities that did not fit into the other two categories. The term "service" has no single definition and ranges from a change in condition or state of an entity caused by another to a set of deeds, processes, and resulting performances.12

Service Science builds on the term "service" in the deed, process, performance sense, by incorporating people, processes, and technologies that interact to deliver services. Chesbrough and Spohrer2 suggest that there are common elements across many different types of services that might form a foundation for a field of Service Science. Types of services include, for instance, interaction of supplier and customer, the exploitation of ICT, change management, and transparency.4, 6, 10, 11

Service Science blends many disciplines including computer science, operations research, industrial engineering, business strategy, management sciences, social and cognitive sciences, and organizational theory.

IT Service Management

Information Technology Service Management (ITSM) is a subset of Service Science that focuses on IT operations such as service delivery and service support. In contrast to the traditional technology-oriented approaches to IT, ITSM is a discipline for managing IT operations as a service; it is process-oriented and accounts for 60%–90% of the total cost of IT ownership.4 Providers of IT services can no longer afford to focus on technology and their internal organization; they now have to consider the quality of the services they provide and focus on the relationship with customers.11

Because ITSM is process-focused, it shares a common theme with the process improvement movement (such as TQM, Six Sigma, Business Process Management, and CMMI). ITSM provides a framework to align IT operations-related activities and the interactions of IT technical personnel with business customer and user processes.3 Figure 1 depicts the evolution of ITSM best practice standards, starting with the Information Technology Infrastructure Library (ITIL) and most recently the December 2005 International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 20000 standard, as well as the other standards (such as COBIT) that influenced the creation of ISO/IEC 20000.

ITSM is often associated with the British Government's ITIL. The ITSM subsection of ITIL concentrates on service support and delivery in IT operations. Approximately 80% of the cost of an infrastructure is in these two areas.4 Also, as many as 90%6 of USA companies have one or more ITSM implementations underway and, with the 2005 ratification of ISO/IEC 20000, other companies are recognizing an opportunity to improve their organizations in ways that may translate to improved organizational competitiveness.6

The ITIL is a framework of best practices intended to facilitate the delivery of high-quality IT services at a justifiable cost. The ITIL is built around a process-based systems perspective of controlling and managing IT operations, including continuous improvement and metrics. The British Government's Central Computer and Telecommunications Agency developed the ITIL during the 1980s in response to its growing dependence on information technology and an increasing need for greater efficiency and effectiveness. The British Government recognized that, without standard practices, government agencies and private sector contractors were independently creating their own IT management practices and duplicating efforts. The ITIL v3 (available May 2007) consists of five publications and associated tools. The publications include Service Strategy, Service Design, Service Transition, Service Operations, and Continual Service Improvement.10

The National Standards Body of the United Kingdom is the British Standards Institute, operating under a Royal Charter since 1901 to act as the standards organization for the British Government. BS (British Standard) 15000, ratified in 2000, was the world's first standard for ITSM.10 The standard specifies a set of interrelated management processes, which form a framework whereby processes and systems can be established and evaluated. BS 15000 is primarily IT operations-oriented and primarily based upon the ITIL.

ISO/IEC 20000 is the next step after BS 15000 in the process of international acceptance of a single set of best practices. ISO/IEC 20000 is the first international standard for ITSM and it consists of two publications. ISO/IEC 20000-1 (Part 1) is the formal standard and defines the 'shall' requirements for delivering quality services. ISO/IEC 20000-2 (Part 2) is a code of practice that describes ITSM best practices. Figure 2 depicts the best practice processes and their inter-relationships as structured in ISO/IEC 20000.
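To make "best practice processes and their inter-relationships" more concrete, the following minimal Python sketch groups the ISO/IEC 20000-1 processes the way they are commonly summarised. The grouping and process names below are an illustrative assumption for this sketch, not a reproduction of the standard or of Figure 2.

# Illustrative (assumed) grouping of ISO/IEC 20000-1 service management processes.
ISO_20000_PROCESS_GROUPS = {
    "service delivery": [
        "service level management", "service reporting",
        "capacity management", "service continuity and availability management",
        "information security management", "budgeting and accounting for IT services",
    ],
    "relationship": ["business relationship management", "supplier management"],
    "resolution": ["incident management", "problem management"],
    "control": ["configuration management", "change management"],
    "release": ["release management"],
}

def find_group(process_name):
    # Return the group a named process belongs to, or "unknown" if it is not listed.
    for group, processes in ISO_20000_PROCESS_GROUPS.items():
        if process_name.lower() in processes:
            return group
    return "unknown"

print(find_group("Change Management"))  # -> control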

The Global Impact of ITSM
The evolution of ITSM from the ITIL framework to BS 15000, and then to the international standard ISO/IEC 20000, reflects the changing global demands placed on IT organizations to deliver IT services. ITSM, as defined in the ITIL, is both a glossary, to ensure a uniform vocabulary, and a set of conceptual processes intended to outline IT best practices. Establishing a set of uniform processes (such as Incident Management, Change Management, etc.) enables the delivery of IT services consistently within a single IT organization as well as across many IT organizations (such as multi-nationals, outsourcers, etc.).
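As an illustration of what one such "uniform process" looks like when encoded, here is a minimal Python sketch of an ITIL-style incident lifecycle. The states and allowed transitions are illustrative assumptions for this sketch, not definitions taken from ITIL or ISO/IEC 20000.

from enum import Enum

class IncidentState(Enum):
    NEW = "new"
    IN_PROGRESS = "in progress"
    RESOLVED = "resolved"
    CLOSED = "closed"

# Assumed allowed transitions for an ITIL-style incident record.
ALLOWED_TRANSITIONS = {
    IncidentState.NEW: {IncidentState.IN_PROGRESS},
    IncidentState.IN_PROGRESS: {IncidentState.RESOLVED},
    IncidentState.RESOLVED: {IncidentState.CLOSED, IncidentState.IN_PROGRESS},  # reopen
    IncidentState.CLOSED: set(),
}

class Incident:
    def __init__(self, summary):
        self.summary = summary
        self.state = IncidentState.NEW

    def move_to(self, new_state):
        # Reject any transition the process does not allow.
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition {self.state} -> {new_state}")
        self.state = new_state

ticket = Incident("Web order page returns HTTP 500")
ticket.move_to(IncidentState.IN_PROGRESS)
ticket.move_to(IncidentState.RESOLVED)
ticket.move_to(IncidentState.CLOSED)

The point of encoding the process this way is that every team handling a ticket follows the same lifecycle, which is exactly the consistency argument made in the paragraph above.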

Figure 1: Evolution of ITSM (Finden-Brown & Long, 2005)

Figure 2: ISO/IEC 20000 (ISO/IEC 20000-1: Part 1, 2005, p. 1)


Businesses around the world are adopting ITSM. As stated at Microsoft's 2004 IT Forum Conference, "Recent studies are showing that an IT service organization could achieve up to a 48 percent cost reduction by applying ITSM principles." According to Forrester, ITIL adoption by large companies with revenue in excess of $1 billion increased from 13% to 20% during 2006. About 90%6 of USA companies have one or more ITSM implementations underway. There are many case examples that testify to the value of incorporating a standardized approach to IT services. Three often-cited cases in the trade publications, such as InfoWorld and Computerworld, are:

• In 2000, the target response time for resolving Web incidents at Caterpillar IT was 30 minutes, but it hit that goal only 30 percent of the time. After Caterpillar implemented ITIL, its IT providers hit the benchmark more than 90 percent of the time (a simple way to compute this kind of attainment is sketched after this list). In addition, Caterpillar has been able to grow its business exponentially in the past five years with only a 1 percent increase in its IT budget.

• Procter & Gamble implemented service management processes outlined by ITIL and saved $125 million, according to company officials.

• In March 2006, Affiliated Computer Services (ACS) became the first company in the USA to be ISO/IEC 20000 certified for standardizing and implementing ITSM processes across six data centers. In addition to re-certifying the initial locations, six other data centers were certified in 2007. While there have been improvements in financial aspects of IT operations management from these deployments, the main benefits have been consistency of outsourced customer handling across data centers, improved quality of delivered service, and improved functional visibility across data centers.
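The Caterpillar case is, at bottom, a service-level calculation: the fraction of incidents resolved within the 30-minute target. The sketch below shows one way such attainment could be computed; the resolution times are invented for illustration and are not Caterpillar data.

def sla_attainment(resolution_minutes, target_minutes=30.0):
    # Fraction of incidents resolved within the target time.
    if not resolution_minutes:
        return 0.0
    met = sum(1 for m in resolution_minutes if m <= target_minutes)
    return met / len(resolution_minutes)

# Hypothetical resolution times (in minutes) for a batch of Web incidents.
times = [12, 25, 48, 29, 90, 15, 31, 22, 27, 10]
print(f"{sla_attainment(times):.0%} of incidents met the 30-minute target")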

These businesses encourage and support their employees in becoming ITIL certified. EXIN, the Examination Institute for Information Science, is a global independent IT examination provider that administers roughly 65% of the ITIL examinations (e.g., Foundation, IT Service Manager, etc.) worldwide. Figure 3 shows the number of ITIL examinations administered by EXIN through 2006. The number of examinations in 2006 was more than 5 times the number of examinations in 2003; the number of examinations in 2006 was more than 20 times the number of examinations in 1998!
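The growth figures quoted above imply a compound annual growth rate, which can be recovered directly from the ratios. Treating "more than 5 times" and "more than 20 times" as exactly 5x and 20x is an assumption made only for this illustration.

def implied_cagr(growth_ratio, years):
    # Compound annual growth rate implied by total growth over a number of years.
    return growth_ratio ** (1.0 / years) - 1.0

# 5x growth over 2003-2006 (3 years) and 20x growth over 1998-2006 (8 years).
print(f"{implied_cagr(5, 3):.0%} per year")   # roughly 71%
print(f"{implied_cagr(20, 8):.0%} per year")  # roughly 45%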

Another indicator of the adoption of ITSM is the growth of its professional organization, the international IT Service Management Forum (itSMF). The itSMF works in partnership with a wide range of governmental and standards bodies to foster the development and widespread use of ITSM practices, and is a major influence on, and contributor to, industry best practices and standards worldwide (e.g., ISO/IEC 20000). Membership in itSMF is diverse and includes large multi-nationals, small and medium local enterprises, individual consultants, and academics that span both the public and private sectors. As of December 2006, there were 40 country chapters, serving over 11,000 members.10 The U.S. country chapter (itSMF-USA) has more than doubled its membership from roughly 1,600 in 2004 to over 6,000 today and contains 42 local interest groups. Attendees at the itSMF-USA conferences have increased from 300 in 2001 (with 13 exhibitors) to 1,875 in 2005 (with 110 exhibitors).

ISO 9000 is a widely adopted quality management standard. According to ISO, there are currently 800,000 ISO 9000 certified organizations worldwide. In January 1996, there were approximately 8,500 registered sites in the USA. Today, there are more than 50,000 ISO 9000 USA certified organizations, due in part to an IRS ruling in January 2000 that many costs associated with ISO 9000 certification would be tax deductible. In addition, the USA Government already unofficially filters IT service contracts, favoring those with ISO certification. If we use ISO 9000 as a surrogate for the future adoption of the ISO/IEC 20000 standard, the forecast is for exponential growth in the adoption of ISO/IEC 20000.
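The surrogate argument above is essentially an extrapolation: take a historical growth pattern and compound it forward. The following minimal sketch shows that kind of projection; the starting count and growth rate are assumed for illustration and are not figures from the article.

def project_adoption(start_count, annual_growth, years):
    # Compound a starting adoption count forward at a fixed annual growth rate.
    counts = [start_count]
    for _ in range(years):
        counts.append(round(counts[-1] * (1 + annual_growth)))
    return counts

# Hypothetical: 500 certified organizations today, growing 40% per year for 5 years.
print(project_adoption(500, 0.40, 5))  # [500, 700, 980, 1372, 1921, 2689]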

Current Initiatives
In the vendor community, several frameworks use the ITIL as their foundation: IBM's Process Reference Model for IT (PRM-IT), Hewlett-Packard's ITSM Reference Model, and the Microsoft Operations Framework (MOF). Each of these frameworks provides its own approach to the use and implementation of ITSM, supported by the proprietary software of the organization.

Several initiatives are underway to transition ITSM into university pedagogy. The first is IBM's Service Sciences, Management, and Engineering (SSME) initiative. The SSME initiative emphasizes undergraduate and graduate programs that focus on the development and support of services. Over 70 universities from around the world have been involved with the SSME initiative. Recently, IBM has joined with Oracle, the Technology Professional Services Association (TPSA), the Service and Support Professionals Association (SSPA), and other IT companies and universities to launch the Service Research and Innovation (SRI) Initiative. The goals of this initiative are to increase the amount of money spent on service research and development in the IT industry and to promote service science as an emerging academic discipline of study.7

Figure 3: EXIN exams for ITIL® IT Service Management Foundation, previous years (Source: Examination Institute for Information Science, 2007)

The second initiative is promoted through the itSMF. In October 2006, itSMF-USA held its first academic forum in Dallas, Texas, to promote the development of ITSM academic programs. Forty universities from around the country met to discuss curricula and research opportunities.

A third initiative is also underway. A group of faculty from 25 universities has petitioned the Association for Information Systems (AIS) to form a special interest group on services management. This special interest group would facilitate new research streams and develop services-related academic programs, in the hope that ITSM will become a new curriculum area that may bolster currently sagging undergraduate and graduate ICT enrollments.

Conclusion
ICT plays a critical role in supporting business functions and satisfying business requirements. As all industries and disciplines move toward a service orientation, ITSM provides direction in that move for IT operations. Industry as a whole can apply ITSM best practices to optimize IT services. The focus of ITSM is to provide specific processes, metrics, and guidance to enable and manage the assessment, planning, and implementation of IT service processes, in order to optimize tactical and strategic use of IT assets. This article aims to raise awareness of ITSM, given the obvious importance of this emerging discipline.

ITSM is an emerging discipline focusing on a set of well-established processes. These processes conform to standards such as ISO/IEC 20000 and best practices such as ITIL. The goal of ITSM is to optimize IT services in order to satisfy business requirements and manage the IT infrastructure while better aligning IT with organizational objectives.

REFERENCES
1. BLS Releases 2004-14 Employment Projections; http://www.bls.gov/news.release/ecopro.nr0.htm.
2. Chesbrough, H. and Spohrer, J. A research manifesto for services science. Comm. of the ACM 49, 7 (July 2006), 35-40.
3. Finden-Brown, C. and Long, J. Introducing the IBM Process Reference Model for IT: PRM-IT Sequencing the DNA of IT Management. IBM Global Services, July 2005.
4. Fleming, W. Using Cost of Service to Align IT. Presentation at itSMF, Chicago, IL, 2005.
5. ISO/IEC 20000-1 Information Technology – Service Management – Part 1: Specification, and Part 2: Code of Practice. International Standards Organization, Geneva, Switzerland, 2005.
6. Lynch, C.G. Most Companies Adopting ITIL® Practices. CIO Magazine, Mar. 1, 2006.
7. Martens, C. IBM, Oracle, others create services consortium: Vendors and university researchers create organization to bring service science to same level as computer science. IDG News Service/InfoWorld, Mar. 28, 2007.
8. National Academy of Engineering. The Impact of Academic Research on Industrial Performance. The National Academies Press, Washington, DC, 2003.
9. Spohrer, J. and Riecken, D. Introduction. Comm. of the ACM 49, 7 (July 2006), 30-32.
10. Taylor, S. ITIL version 3. Presented at itSMF-USA, Salt Lake City, UT, Sept. 18-23, 2006.
11. van Bon, J. IT Service Management: An Introduction. IT Service Management Forum. Van Haren Publishing, UK, 2002.
12. Zeithaml, V.A. and Bitner, M.J. Service Marketing. McGraw Hill, NY, 1996.

Acknowledgement: An earlier version of this article was presented at the ACM SIGMIS Computer Personnel Doctoral Consortium & Research Conference in St. Louis, Missouri, U.S., April 19-21, 2007.

Stuart D. Galup (sgalup@fau.edu) is an associate professor in the Department of Information Technology and Operations Management at Florida Atlantic University, FL.

Ronald Dattero (RonDattero@MissouriState.edu) is a professor in the Department of Computer Information Systems at Missouri State University, MO.

Jim J. Quan (jxquan@salisbury.edu) is an assistant professor in the Department of Information and Decision Sciences at the Perdue School of Business, MD.

Sue Conger (sconger@gsm.udallas.edu) is an associate professor at the University of Dallas, TX.

© 2009 ACM 0001-0782/09/0500 $5.00


A Categorisation of Cloud Computing Business Models

Victor Chang, David Bacigalupo, Gary Wills, David De Roure

School of Electronics and Computer Science, University of Southampton,

Southampton SO17 1BJ. United Kingdom

vic1e09@ecs.soton.ac.uk

Abstract – This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government Funding; (7) Venture Capital; and (8) Entertainment and Social Networking. Using the Jericho Forum's 'Cloud Cube Model' (CCM), the paper presents a summary of the eight business models. We discuss how the CCM fits into each business model and then, based on this, discuss each business model's strengths and weaknesses. We hope that adopting an appropriate cloud computing business model will help organisations investing in this technology to stand firm in the economic downturn.

1. INTRODUCTION

Cloud Computing aims to provide scalable and inexpensive on-demand computing infrastructures with good quality of service (QoS) levels. More specifically, this involves a set of network-enabled services that can be accessed in a simple and pervasive way [10]. Cloud Computing provides a compelling value proposition for organisations to outsource their Information and Communications Technology (ICT) infrastructures [6]. It also provides added value for organisations, saving costs in operations, resources and staff, as well as new business opportunities for service-oriented models [2, 3, 10]. In addition, it is likely that cloud computing focused on operational savings and green technology will be at the centre of attention. To avoid repeats of Internet bubbles and to maintain business operations, achieving long-term sustainability is an important success factor for organisations [4]. In this paper we review current cloud computing business models, and provide recommendations on how organisations can achieve sustainability by adopting appropriate models.

2. BUSINESS MODEL CLASSIFICATION

Extensive work has been done on investigating business models empowered by Cloud technologies [9]. Despite leading IT vendors such as Amazon, Microsoft, Google, IBM and Salesforce taking the lead, the amount of investment and spending is still greater than the profits received from these investments. This illustrates the importance of identifying the right business strategies and models for long-term sustainability. Based on previously identified use cases, surveys, analyses and reviews of cloud computing business models [1, 4, 5, 8], we categorise these models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government Funding; (7) Venture Capital; and (8) Entertainment and Social Networking.

3. THE CLOUD CUBE MODEL AND OUR UPDATED DEFINITIONS

The Cloud Cube Model (CCM) proposed by the Jericho Forum (JF) is used to enable secure collaboration in the cloud formations best suited to the business needs [7]. The JF points out that many cloud service providers claim to be able to deliver solutions, so cloud customers need to select the right formation within the CCM to suit their needs. Within the CCM, four distinct dimensions are identified: (a) External and Internal; (b) Proprietary and Open; (c) Perimeterised (Per) and De-Perimeterised (D-p); and (d) In-sourced and Outsourced. Sections 3.1 to 3.4 describe how each dimension fits the business models. The diagram for the CCM is shown in Figure 1 [7].

Figure 1: The Cloud Cube Model

3.1 Internal and External
This dimension describes the type of cloud to adopt: Internal means private clouds and External means public clouds.

3.2 Proprietary and Open
Proprietary means paid services or contractors. Open stands for open source services or solutions. In the context of cloud computing, open sometimes means a system or platform that allows sharing and free access to APIs; in this respect, Google App Engine can be considered open.

3.3 Perimeterised (Per) and De-perimeterised (D-p)
The original definition treats Per and D-p as an architectural mindset – that is, whether traditional IT perimeters such as networks and firewalls operate inside (Per) or outside (D-p) the organisation. In our context, which differs from the JF's, Perimeterised means infrastructure as a service (IaaS) and platform as a service (PaaS), or any services, contracts and support using infrastructure and platform, since these are bound by a hardware perimeter. De-perimeterised stands for Software as a Service (SaaS), or any services, contracts or support for software and applications.

3.4 Insourced and Outsourced
Insourced means in-house development of clouds. Outsourced refers to letting contractors or service providers handle all requests; most cloud business models fall into this category.
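Because each CCM dimension is binary, a cloud offering's position in the cube can be captured as a small record. The Python sketch below illustrates this under the paper's updated definitions; the two example placements are plausible readings of Figures 2 and 4, not positions stated by the authors, and all identifier names are ours.

from dataclasses import dataclass
from enum import Enum

class Location(Enum):       # CCM dimension (a)
    INTERNAL = "internal"   # private cloud
    EXTERNAL = "external"   # public cloud

class Ownership(Enum):      # CCM dimension (b)
    PROPRIETARY = "proprietary"
    OPEN = "open"

class Perimeter(Enum):      # CCM dimension (c), per the paper's updated definitions
    PERIMETERISED = "IaaS/PaaS"
    DE_PERIMETERISED = "SaaS"

class Sourcing(Enum):       # CCM dimension (d)
    INSOURCED = "insourced"
    OUTSOURCED = "outsourced"

@dataclass
class CloudCubePosition:
    location: Location
    ownership: Ownership
    perimeter: Perimeter
    sourcing: Sourcing

# Assumed example placements, read informally from the paper's figures.
public_iaas_provider = CloudCubePosition(
    Location.EXTERNAL, Ownership.PROPRIETARY,
    Perimeter.PERIMETERISED, Sourcing.OUTSOURCED)

in_house_private_cloud = CloudCubePosition(
    Location.INTERNAL, Ownership.PROPRIETARY,
    Perimeter.PERIMETERISED, Sourcing.INSOURCED)

print(public_iaas_provider)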

4. HOW EACH BUSINESS MODEL FITS INTO THE CCM

This section explains how each business model fits into the Cloud Cube Model. The strengths and weaknesses of each business model are presented at the end of each subsection.

4.1 Service Provider and Service Orientation
Most Service Providers offer public clouds, which include infrastructure, platform and software as a service. Service Providers require clients to outsource to them. Therefore, this business model takes on all the upper part of the Cloud Cube Model (CCM), shown in light purple in Figure 2.

Figure 2: CCM for Service Providers and Service Orientation

Strength: This is a mainstream business model, and demand and requests are guaranteed. There are still unexploited areas for offering services and making profits.
Weakness: Competition can be very stiff in all of infrastructure, platform and software as a service. Data privacy is a concern for some clients.

Service providers in Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) all fall into this model.

4.2 Support and Service Contracts
Support and Service Contractors deal in proprietary solutions for private domains, and they can cover infrastructure, platform and software services. Therefore, this model occupies the lower-left front and back of the Cloud Cube Model, shown in light purple in Figure 3.

Figure 3: CCM for Support and Service Contracts

Strength: Suitable for small and medium enterprises, which can make extra profits and expand their levels of services.
Weakness: Some firms may experience a period without contracts, and they must then change their strategies.

4.3 In-House Private Clouds
The In-House Private Cloud model deals with private clouds and does not seek outsourcing. This model can work for Software as a Service; early starters for such projects currently focus on the infrastructure and platform levels. Therefore, the In-House Private Cloud model takes the lower front quarter of the CCM, shown in light blue in Figure 4.

Figure 4: CCM for In-House Private Clouds

Strength: Best suited for organisations developing their own private clouds, which will not have data security and data loss concerns.
Weakness: Projects can be complicated and time consuming.

4.4 All-In-One Enterprise Cloud
The All-In-One Enterprise Cloud model takes on all parts of the CCM and has the combined characteristics of the Service Provider and Service Orientation model and the In-House Private Clouds model. The only difference is that some areas overlap both outsourced and in-house options, shown in dark purple. Therefore, all parts of the CCM are in light purple except for internal clouds, which have the joint characteristics of outsourcing and in-house development and are shown in dark purple in Figure 5.

Figure 5: CCM for All-In-One Enterprise Cloud

Strength: Can be the ultimate business model for big players, consolidating different business activities and strategies, including an ecosystem approach or comprehensive SaaS.
Weakness: Small and medium enterprises are unlikely to be suited to this model unless they join part of an ecosystem.

4.5 One-Stop Resources and Services
The One-Stop Resources and Services model has the same characteristics as the Service Provider and Service Orientation model, except that this model often needs combined effort from both outsourced and in-house parties. Currently, proprietary vendors are taking the lead compared to academic community clouds. Even if a community cloud exists, it must be on a public domain for restricted users only, and in that respect it is an external rather than an internal cloud. This model takes the upper half of the CCM, shown in dark purple in Figure 6.

Figure 6: CCM for One-Stop Resources and Services

Strength: A suitable model for business partnerships and the academic community; participants can gain mutual benefits through collaboration.
Weakness: All participating organisations or members should contribute. If not managed well, it may end up in other business models or with the community breaking apart.

4.6 Government Funding
Government funds are available for both academic institutions and corporate firms; however, the funding purposes and research directions for the two groups are often not the same. When government funds the private sector, this is considered outsourcing and takes the left half of the CCM in light purple. When government funds academic institutions, which require a period of internal research and development (R&D) work, they take the right half of the CCM in light blue. Government then looks at the two sets of research outcomes and seeks a joint solution, or hybrid recommendation; therefore both solutions overlap in the middle in dark purple, as shown in Figure 7.

Figure 7: CCM for Government Funding

Strength: Government can invest a massive amount, which is beneficial for projects requiring extensive R&D, resources and highly trained staff.
Weakness: Only affluent governments can afford this, and top-class firms and universities tend to be selected.

4.7 Venture Capital
Venture capital takes a similar approach to Government Funding, except that the open, de-perimeterised and external cloud within the CCM is not just an in-house approach but an integrated one. This is because investors tend to consider whether a successful cloud project is not only relevant to their invested firms but also appealing to a wider group of users, with examples such as Ubuntu and Parascale. Hence, there are more overlapping areas than in the Government Funding model, including the right upper quarter of the CCM. These external clouds can be outsourced (Ubuntu and Amazon EC2, or Ubuntu support and services) or in-house (users can opt for Ubuntu private clouds). The remaining area in the right lower quarter is in light blue, due to in-house research and development. Figure 8 is the best representation of the Venture Capital model.

Figure 8: CCM for Venture Capital

Strength: Can receive a surplus that is essential for sustainability; useful for start-ups or organisations nearly running out of cash.
Weakness: It can be a prolonged process without a guarantee of getting anything.

4.8 Entertainment and Social Networking
Currently, Entertainment and Social Networking focus on Software as a Service and are typically proprietary and outsourced solutions. Therefore, this model occupies only one cube (in light purple) within the Cloud Cube Model. Despite this, the model has the largest number of users, which boosts its services, advertising and peripheral product sales. The profits and investment attracted by Apple, Facebook and Shanda Games are very large given the age of these companies. See Figure 9.

Figure 9: CCM for Entertainment and Social Networking

Strength: If successful, this model tends to dash into a storm of popularity and money in a short time.
Weakness: Potential social problems; teenagers can indulge in social networking and excessive gaming, not attending school, with bad social behaviour in a few extreme cases.

5. CONCLUSION

Cloud computing business models are a relatively new area, and finding the right business model can enhance organisational sustainability. In this paper, we classify cloud computing business models into eight types. We discuss how the Cloud Cube Model (CCM) fits into each business model and, based on this, discuss the strengths and weaknesses of each. By adopting the right business model, we hope organisations can stand firm in economic downturns and expand their businesses.

Future work includes publishing details of our proposed Financial Cloud Framework (FCF). This extends our business models and the CCM with a focus on the healthcare and financial domains, and includes financial modelling for forecasting, simulations and benchmarking of financial assets. An objective of the FCF is to simplify business models and processes. Currently a small number of organisations have either adopted or are considering using our cloud computing business models and the FCF. These include an anonymous NHS entity in London and an anonymous university working together on private clouds, and the UK National Grid Service and OMII-UK for community and hybrid clouds. We will also propose a new business model, the Hexagon Model, and will explain how it can complement the CCM, with more case studies and modelling presented.

REFERENCES

[1] Armbrust, M. et al., "Above the Clouds: A Berkeley View of Cloud Computing", UC Berkeley Reliable Adaptive Distributed Systems Laboratory Technical Report, February 2009.
[2] Boss, G. et al., "Cloud Computing", IBM white paper, Version 1.0, October 2007.
[3] Chang, V. et al., "Cancer Cloud Computing – Towards an Integrated Technology Platform for Breast Cancer Research", NHS Technical Paper, July 2009.
[4] Chang, V., Mills, H. and Newhouse, S., "From Open Source to long-term sustainability: Review of Business Models and Case studies", UK e-Science All Hands Meeting, Nottingham, UK, September 2007.
[5] Chang, V., "The Financial Cloud Computing", nine-month thesis technical report, University of Southampton School of Electronics and Computer Science, February 2010.
[6] Haynie, M., "Enterprise cloud services: Deriving business value from Cloud Computing", Micro Focus, Technical Report, 2009.
[7] Jericho Forum, "Cloud Cube Model: Selecting Cloud Formations for Secure Collaboration Version 1.0", Jericho Forum Specification, April 2009.
[8] Lazonick, W., "Evolution of the New Economy Business Model", UMass Lowell and INSEAD, 2005.
[9] Lohr, S., "Google and I.B.M. Join in 'Cloud Computing' Research", New York Times, October 2007.
[10] Wang, L., Kunze, M. et al., "Cloud Computing: a Perspective Study", Grid Computing Environments Workshop (GCE'08), Austin, Texas, December 2008.
