Abstract—Cloud data centers use virtualization to reduce cost and achieve more efficient use of resources. VM migration also enables capabilities such as load balancing. In IT practice, VM migration is one of the standard tools for moving an operating system between physical machines. Virtual machine migration grows in importance as user demand for cloud services increases. VM migration can affect application performance, and cloud data centers can apply server consolidation as an optimization technique. This paper discusses several VM migration techniques.

 

I. INTRODUCTION


 

In cloud computing, VMs migrate across cloud data centers. A cloud data center offers many facilities to manage resources for optimization. Virtualization uses VM migration to relocate VMs across cloud data centers, enabling capabilities such as maintenance and load balancing. VM migration techniques move VMs over LAN or WAN links; migrating VMs across WAN links is used by server consolidation frameworks to shut down unneeded servers. VM migration methods for server consolidation are examined here to derive critical parameters and identify the best VM migration techniques. This paper is structured as follows. Section II covers cloud computing, VM migration, DVFS technology, and the server consolidation method. Section III presents a taxonomy for the classification of server consolidation frameworks, reviews existing server consolidation frameworks, and compares existing frameworks based on parameters selected from the literature. Section IV presents a thematic taxonomy of bandwidth optimization schemes and discusses state-of-the-art bandwidth, storage, and DVFS-enabled power optimization, followed by a detailed comparison of existing schemes. Section V concludes the paper and briefly notes research issues and trends in the VM migration domain.

 

 

II. BACKGROUND

 

This section discusses cloud computing, virtual machine migration, server consolidation, and the DVFS-enabled VM migration process.

 

 

A. Cloud Computing

 

Cloud computing helps users access a range of data and software services to manage their work. It is a computing model that enables better management, higher utilization, and reduced operating costs for data center operators while providing on-demand resource provisioning for multiple customers. Users rent virtual resources and pay only for what they use. The number of application programs running and the number of connected users determine the workload placed on the cloud.

 

B. Virtual Machine Migration

 

Virtual machines in the cloud may need to migrate from one machine to another. One example is Cicada, whose architecture places VMs and updates their placement over time. When VMs are placed on servers, their datasets should be accessible to all VMs through separate storage services such as Amazon's. Migrating a virtual machine requires migrating both its virtual memory and its local data. Migration can be live, but it still imposes a few seconds of downtime on the VM.

 

C. Server Consolidation

 

Some applications cannot run on the same operating system. Virtual servers make it possible to consolidate those applications on one physical server: a virtual server lets users install many operating systems on the same machine, so previously incompatible applications can run side by side, each isolated from the others. Server consolidation thus merges many distinct servers into multiple virtual machines that run on one physical server.

 

D. DVFS-enabled VM migration process

 

Today, most processors include Dynamic Voltage and Frequency Scaling (DVFS), which adjusts the CPU frequency at runtime. DVFS and consolidation allow VMs to be migrated between hosts depending on the CPU load across hosts, and allow unneeded machines to be switched off. A consolidation system should pack all VMs onto a reduced set of machines; those machines then run at high CPU load, so DVFS alone may be of limited use. An important constraint for a consolidation system is memory: every virtual machine needs physical memory, which limits the number of VMs that can run on a host. As a result, even if consolidation reduces the number of active machines in a hosting center, it cannot guarantee full CPU utilization on the active machines when placement is memory bound. The actual value of DVFS lies in reducing power consumption by lowering the processor frequency. However, most computing infrastructures today depend on multi-core, high-frequency processors.
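The power saving that DVFS exploits follows the classic dynamic power model P = C·V²·f. A minimal sketch, with hypothetical capacitance, voltage, and frequency values chosen purely for illustration:

```python
def dynamic_power(capacitance, voltage, frequency):
    """Classic dynamic power model: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Hypothetical values, not from any specific CPU: halving the frequency
# while scaling the voltage down proportionally cuts dynamic power to
# roughly one eighth of the original.
full = dynamic_power(1.0, 1.2, 3.0e9)
scaled = dynamic_power(1.0, 0.6, 1.5e9)
print(scaled / full)  # -> 0.125
```

This cubic relationship is why frequency scaling saves power so effectively when load is low, and also why it helps little on consolidated servers that are already running near full CPU load.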

 

III. SERVER CONSOLIDATION

 

This section presents a taxonomy of server consolidation frameworks, a review of server consolidation frameworks, and a comparison of existing frameworks based on parameters selected from the literature.

 

A. Taxonomy of server
consolidation frameworks

 

This section presents a taxonomy for classifying server consolidation frameworks. Server consolidation frameworks are divided based on five common characteristics: resource assignment policy, architecture, co-location criteria, migration triggering point, and migration model [2]. The resource assignment policy attribute is either static or dynamic; the static server consolidation method pre-assigns maximum resources to a VM upon its creation. The architecture attribute describes the server consolidation framework's design; centralized server consolidation frameworks are prone to a single point of failure, which makes them unreliable. The co-location criteria attribute defines the criterion used to co-host multiple VMs within a server; VM co-location criteria can be defined in terms of shared memory, communication bandwidth between VMs, power efficiency, and sufficient resource availability to decide on the appropriate time to migrate a VM. The migration model describes the migration pattern chosen to move a VM between servers; during server consolidation, VMs are migrated using either the pre-copy or the post-copy migration pattern.
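The pre-copy pattern can be sketched as a simple simulation: the source pushes the current dirty-page set each round while the workload keeps dirtying pages, then pauses the VM for a final stop-and-copy. This is an illustrative model only (the page counts, thresholds, and convergence workload are hypothetical), not the algorithm of any particular hypervisor:

```python
# Illustrative pre-copy loop: memory is modeled as a set of dirty page
# numbers; each round pushes the current dirty set while the workload
# dirties more pages in the background.
def pre_copy_migrate(dirty_pages, dirtied_per_round, max_rounds=30, stop_threshold=8):
    rounds = 0
    transferred = 0
    while len(dirty_pages) > stop_threshold and rounds < max_rounds:
        transferred += len(dirty_pages)          # push this round's pages
        dirty_pages = dirtied_per_round(rounds)  # pages dirtied meanwhile
        rounds += 1
    # Stop-and-copy: pause the VM and send the small remaining set.
    transferred += len(dirty_pages)
    return transferred, len(dirty_pages)

# A workload whose dirty set halves each round converges quickly.
total, downtime_pages = pre_copy_migrate(
    set(range(1000)), lambda r: set(range(1000 // (2 ** (r + 1)))))
```

The `downtime_pages` count is what determines the VM's pause time; write-intensive workloads whose dirty set does not shrink force the `max_rounds` cutoff instead, which is why pre-copy downtime varies with workload.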

 

B. A review of server
consolidation frameworks

 

VM placement depends on communication cost in order to improve the performance of I/O and non-I/O applications. Communication cost is a function of the communication rate and the end-to-end network delay. Representing the communication cost between VMs makes it possible to identify communication-intensive VMs and form VM clusters. A cost tree representing the communication cost between VMs can then be traversed to place VMs according to the communication distance between them.
 

Unwanted VM migrations are avoided in order to decrease SLA violations. The framework, however, does not consider the effect of CPU and memory workloads during VM placement; memory-intensive workloads in particular can damage system performance.

 

C. Comparison of server consolidation frameworks

 

Many VM migration approaches have optimized application downtime and total migration duration by employing optimization and avoiding aggressive migration termination. Moreover, an optimization method introduces additional overhead on shared resources such as CPU, memory, or cache while optimizing VM migration performance parameters such as downtime, total migration time, and application QoS. A qualitative comparison of VM migration schemes, based on selected parameters, highlights commonalities and variances among existing schemes. Migration optimization exploits deduplication, compression, fingerprinting, and dynamic self-ballooning to improve application and network performance, so VM migration approaches can make optimized use of network bandwidth.

 

IV. VIRTUAL MACHINE MIGRATION OPTIMIZATION

 

This section presents and compares VM migration optimization schemes that consider bandwidth, DVFS-enabled power, and storage optimization to reduce the side effects of the VM migration process. VM migration within a LAN uses a network-attached storage (NAS) architecture to share storage between the communicating servers. However, migrating a VM across WAN boundaries requires migrating large-sized storage in addition to VM memory over intermittent links.

 

A. Bandwidth optimization

 

This section discusses the effective use of limited network capacity to enhance application performance during the VM migration process. It also presents a thematic taxonomy, an evaluation of existing schemes, and comparisons between bandwidth optimization schemes.

 

1)   Taxonomy of bandwidth optimization schemes: Different bandwidth-optimized live VM migration schemes result in varying application downtime and total migration time, depending on the nature of the workload hosted within the migrant VM, the type of network link, the number of concurrent migrant VMs, and the type of hypervisor selected to manage server resources. One proposed scheme applies binary XOR-based RLE (XBRLE) delta compression to improve VM migration performance. Prior to triggering migration, a guest kernel conveys soft page addresses to the VMM. For further improvement, the delta page is compressed using a lightweight compression algorithm.
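The XOR-plus-RLE idea behind XBRLE can be sketched minimally: XOR the old and new versions of a page so unchanged bytes become zero, then run-length encode the mostly-zero delta. This is illustrative only; the real scheme runs on raw guest pages inside the hypervisor, and the run-length format here is an assumption:

```python
def xor_delta(old_page: bytes, new_page: bytes) -> bytes:
    """XOR old and new page versions; unchanged bytes become zero."""
    return bytes(a ^ b for a, b in zip(old_page, new_page))

def rle_encode(data: bytes):
    """Encode as (byte, run_length) pairs; long zero runs compress well."""
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

old = bytes(4096)                                    # a 4 KiB page of zeros
new = bytes(100) + b"\x07" * 8 + bytes(4096 - 108)   # 8 changed bytes
delta = xor_delta(old, new)
encoded = rle_encode(delta)                          # a handful of runs
```

Since only the delta's few non-zero runs cross the network, re-sending a slightly dirtied page costs far less than re-sending the full 4 KiB, which is exactly the saving delta compression targets during iterative migration rounds.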

2)   Review of bandwidth optimization schemes: An optimized post-copy VM migration scheme was proposed that exploits on-demand paging, active push, pre-paging, and dynamic self-ballooning optimizations to pre-fetch memory pages at the receiver host. However, growing bubbles around the pivot memory page to transfer neighboring memory pages does not always improve VM migration performance, especially when write-intensive applications are hosted within the migrated VMs. Active push transfers memory pages to the target server and ensures that every page is sent exactly once from the source server. This scheme begins by transferring CPU registers and device states to the receiver host prior to VM memory content migration.
 

3)   Comparison of bandwidth optimization schemes: Many VM migration approaches have optimized application downtime and total migration duration by employing optimization and avoiding aggressive migration termination, as in the case of pre-copy. Moreover, an optimization method introduces additional overhead on shared resources such as CPU, memory, or cache while optimizing VM migration performance parameters such as downtime, total migration time, and application QoS. A qualitative comparison of VM migration schemes, based on selected parameters, highlights commonalities and variances in existing bandwidth optimization schemes. Live VM migration schemes follow either pre-copy, post-copy, or hybrid migration patterns to migrate VMs across servers.

 

B. DVFS-enabled power optimization

 

VM migration helps reduce the power consumption budget by migrating VMs. However, power consumption within a server during VM migration exceeds the limited support offered by the CPU architecture for applying DVFS. One proposed approach considers a VM CAP value to decrease power consumption: the scheme reduces the processor clock rate to keep power consumption within a certain limit. DVFS technology uses the relation between voltage, frequency, and processor speed to adjust the CPU clock rate [3]. A power-capping-based VM migration scheme was also discussed that prioritizes VM migration. PMapper is a power-aware application placement framework that considers power usage and migration cost while deciding on application placement within a data center. Moreover, during VM migration, the power manager adaptively applies DVFS to balance power efficiency and SLA guarantees. The PMapper architecture is based on three modules, namely the performance manager, the power manager, and the monitoring engine. For optimal VM placement that accounts for power efficiency and application SLAs, PMapper uses bin-packing heuristics to map VMs onto suitable servers. The monitoring engine module gathers server/VM resource usage and power state statistics before forwarding them to the power and performance managers. Furthermore, it sorts the servers based on resource usage and power consumption, choosing the most suitable server to host the workload according to resource availability and power consumption estimates. It also identifies underutilized servers from resource usage statistics and migrates their load to other servers so that they can be shut down for power efficiency, allocating workload under a policy that minimizes energy consumption. A scheduling algorithm was also proposed that utilizes DVFS methods to limit the power consumption budget within a data center; the proposed scheduler dynamically checks application processing demands and optimizes energy consumption using DVFS. Another design integrates power efficiency with power capping through an adaptive DVFS-enabled power efficiency controller and a hierarchical controller for power capping. The control system architecture consists of an efficiency controller, a server capper, and a group capper. The efficiency controller is responsible for tracking the demands of individual servers, the server capper throttles power consumption according to feedback, and the group capper throttles power consumption at the server group level. In addition to power distribution unfairness, the proposed scheme assumes that the server group configuration and power supply structure are flat; in practice, however, they are hierarchical.
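Bin-packing placement of the kind PMapper-style frameworks use can be illustrated with a first-fit-decreasing sketch. The VM names, normalized CPU demands, and unit server capacity below are assumptions for illustration, not values from PMapper itself:

```python
# First-fit-decreasing bin packing: sort VMs by descending demand, place
# each on the first server with room, and power on a new server only
# when no existing server fits.
def first_fit_decreasing(vm_demands, capacity=1.0):
    free = []          # remaining capacity of each active server
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, room in enumerate(free):
            if demand <= room:
                free[i] -= demand
                placement[vm] = i
                break
        else:
            free.append(capacity - demand)   # power on a new server
            placement[vm] = len(free) - 1
    return placement, len(free)

demands = {"vm1": 0.6, "vm2": 0.5, "vm3": 0.4, "vm4": 0.3, "vm5": 0.2}
placement, active_servers = first_fit_decreasing(demands)
```

Packing 2.0 units of demand onto two unit-capacity servers lets every other server be shut down or frequency-scaled, which is the power saving these heuristics pursue; real frameworks additionally weigh migration cost and SLA constraints before committing a placement.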

C. Storage optimization

 

One proposed model consists of two components, a target server and a proxy server, connected to the source and destination servers through a network block device connection. Whenever the destination storage is completely synchronized with the source, the connection is torn down to release source server resources. A prototype implementation of I/O-blocked live storage migration rapidly relocates disk blocks over WAN links with minimal impact on I/O performance. The on-demand method fetches blocks from the source when they are not available at the destination server; however, it requires storage sharing between the sender and target servers at distant locations over the Internet. Experiments revealed that I/O performance improved significantly compared to conventional remote storage migration methods in terms of total migration time and cache hit ratio. To utilize bandwidth capacity efficiently, the background copy method is improved with compression; introducing compression enhances network performance in terms of bandwidth utilization, and the LZO algorithm is used to reduce the total transferred data for storage synchronization as well as the migration time. In case of connection failure during storage migration, the hosted application's performance degrades significantly and the system may crash; limited WAN bandwidth likewise degrades the live storage migration process. A bitmap-based storage migration scheme employs a simple hash algorithm such as SHA-1 to create and transfer a list of storage blocks, called the sent bitmap, to the destination server. In order to migrate VMs back after server maintenance, an intelligent incremental migration (IM) approach is proposed that only transfers blocks updated after the original migration, reducing migration time and total migration data. Because synchronous replication is costly, affecting running applications, the network, and system resources, a cooperative, context-aware migration approach was proposed that enables the migration management system to arrange DC migration across server platforms.
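The hash-based incremental transfer described above can be sketched as follows. SHA-1 follows the scheme's own description, while the 4 KiB block size and the toy in-memory disk are assumptions for illustration:

```python
import hashlib

BLOCK = 4096  # assumed block size for illustration

def block_hashes(disk: bytes):
    """SHA-1 digest of each fixed-size block of the disk image."""
    return [hashlib.sha1(disk[i:i + BLOCK]).digest()
            for i in range(0, len(disk), BLOCK)]

def changed_blocks(old_hashes, disk: bytes):
    """Indices of blocks whose content no longer matches the baseline."""
    return [i for i, h in enumerate(block_hashes(disk)) if h != old_hashes[i]]

disk = bytearray(8 * BLOCK)            # toy disk of 8 empty blocks
baseline = block_hashes(bytes(disk))   # hashes recorded at first migration
disk[5 * BLOCK] = 0xFF                 # the workload dirties block 5
dirty = changed_blocks(baseline, bytes(disk))  # -> [5]
```

Only the blocks in `dirty` need to cross the WAN on the return migration, which is how the incremental approach cuts both migration time and total transferred data relative to resending the full image.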

 

V.  CONCLUSION

 

In this paper, the notions of cloud computing, VM migration, storage migration, server consolidation, and dynamic voltage and frequency scaling (DVFS) based power optimization are discussed. The large size of VM memory, the unpredictable nature of workloads, limited bandwidth capacity, restricted resource sharing, the inability to accurately predict application demands, and aggressive migration decisions all call for dynamic, lightweight, adaptive, and optimal VM migration designs that improve application performance. Furthermore, the inclusion of heterogeneous, dedicated, and fast communication links for transferring storage and VM memory can augment application performance by reducing total migration time and application service downtime. Several server consolidation frameworks co-locate VMs to this end. A lightweight VM migration design can reduce overall development effort, augment application performance, and speed up processing in cloud data centers. The incorporation of dynamic workload behavior into migration decisions remains an open direction.

 

ACKNOWLEDGMENT

 

I thank Dr. Tara Yehia lecturer of Cloud
Computing in University of Kurdistan-Hewler for providing all necessary
information related to cloud computing.

 
