This document discusses client-side load balancing in a cloud computing environment. It describes how a client-side load balancer can distribute requests across backend web servers in a scalable way without requiring control of the infrastructure. The proposed architecture uses static anchor pages hosted on Amazon S3 that contain JavaScript code to select a web server based on its reported load. The JavaScript then proxies the request to that server and updates the page content. This approach achieves high scalability and adaptiveness without hardware load balancers or layer 2 optimizations.
Load Balancing In Cloud Computing newppt, by Utshab Saha
The document discusses various load balancing algorithms for cloud computing including round robin, first come first serve (FCFS), and simulated annealing. It provides implementations of each algorithm in CloudSim and compares the results. Round robin and FCFS showed similar overall response times, data center processing times, and maximum/minimum values. Simulated annealing had slightly lower average overall response time. The document proposes using a genetic algorithm for host-side optimization to select the best host for virtual machine requests.
These slides cover load balancing as a concept and its implementation at a technical level. They show server load balancing with different architectures, algorithms, and examples.
Cloud load balancing distributes workloads and network traffic across computing resources in a cloud environment to improve performance and availability. It routes incoming traffic to multiple servers or other resources while balancing the load. Load balancing in the cloud is typically software-based and offers benefits like scalability, reliability, reduced costs, and flexibility compared to traditional hardware-based load balancing. Common cloud providers like AWS, Google Cloud, and Microsoft Azure offer multiple load balancing options that vary based on needs and network layers.
This document discusses load balancing, which is a technique for distributing work across multiple computing resources like CPUs, disk drives, and network links. The goals of load balancing are to maximize resource utilization, throughput, and response time while avoiding overloads and crashes. Static load balancing involves preset mappings, while dynamic load balancing distributes workload in real-time. Common load balancing algorithms are round robin, least connections, and response time-based. Server load balancing distributes client requests to multiple backend servers and can operate in centralized or distributed architectures using network address translation or direct routing.
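As an illustration of the selection policies named above, here is a minimal JavaScript sketch of round-robin and least-connections server selection; the server list, connection counts, and function names are hypothetical and only meant to show the idea.

// Hypothetical backend pool; activeConnections would be tracked by the balancer.
const servers = [
  { host: "10.0.0.1", activeConnections: 3 },
  { host: "10.0.0.2", activeConnections: 1 },
  { host: "10.0.0.3", activeConnections: 5 },
];

let rrIndex = 0;

// Round robin: cycle through the pool in order, ignoring load.
function pickRoundRobin(pool) {
  const server = pool[rrIndex % pool.length];
  rrIndex += 1;
  return server;
}

// Least connections: pick the server currently handling the fewest requests.
function pickLeastConnections(pool) {
  return pool.reduce((best, s) =>
    s.activeConnections < best.activeConnections ? s : best);
}

console.log(pickRoundRobin(servers).host);       // 10.0.0.1, then 10.0.0.2, ...
console.log(pickLeastConnections(servers).host); // 10.0.0.2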
A distributed file system allows files to be stored on multiple computers that are connected over a network. It implements a common file system that can be accessed by all computers. Key goals are network transparency, so users can access files without knowing their location, and high availability, so files can always be easily accessed regardless of physical location. The main components are a name server that maps file names to locations, and cache managers that store copies of remote files locally to improve performance. Mechanisms like mounting, caching, bulk data transfer, and encryption help build robust distributed file systems.
This document discusses load balancing in cloud computing. It begins by defining cloud computing and some of its key characteristics like broad network access, rapid elasticity, and pay-as-you-go pricing. It then discusses how load balancing can improve performance in distributed cloud environments by redistributing load, improving response times, and better utilizing resources. The document outlines different load balancing techniques like virtual machine migration and throttled load balancing using a load balancer, virtual machines, and a data center controller. It also proposes a trust and reliability based algorithm that prioritizes data centers for load balancing based on calculated trust values that consider factors like initialization time, machine performance, and fault rates.
Virtualization is a technique that allows a single physical instance of an application or resource to be shared among multiple organizations or tenants (customers).
Virtualization is a proven technology that makes it possible to run multiple operating systems and applications on the same server at the same time.
Virtualization is the process of creating a logical (virtual) version of a server operating system, a storage device, or network services.
The technology that works behind virtualization is known as a virtual machine monitor (VMM), or virtual manager, which separates compute environments from the actual physical infrastructure.
Security in Clouds: cloud security challenges, Software as a Service security. Common Standards: the Open Cloud Consortium, the Distributed Management Task Force, standards for application developers, standards for messaging, standards for security. End-user access to cloud computing, mobile Internet devices and the cloud. Hadoop, MapReduce, VirtualBox, Google App Engine, and the programming environment for Google App Engine.
Load balancing is used to distribute workloads across multiple servers in cloud computing. It aims to optimize resource use and minimize response time. The document proposes using a round robin approach to distribute loads from virtual machines across servers periodically to reduce server workload and use networks efficiently. Key benefits outlined are high scalability, availability, and flexibility to balance various protocols and route traffic based on server health. The conclusion states that load balancing is important in cloud computing to distribute work evenly for high user satisfaction and resource utilization, though further research is still needed.
Scheduling refers to allocating computing resources like processor time and memory to processes. In cloud computing, scheduling maps jobs to virtual machines. There are two levels of scheduling - at the host level to distribute VMs, and at the VM level to distribute tasks. Common scheduling algorithms include first-come first-served (FCFS), shortest job first (SJF), round robin, and max-min. FCFS prioritizes older jobs but has high wait times. SJF prioritizes shorter jobs but can starve longer ones. Max-min prioritizes longer jobs to optimize resource use. The choice depends on goals like throughput, latency, and fairness.
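To make the contrast between FCFS and SJF concrete, here is a small JavaScript sketch; the job list and field names are illustrative, not from the original document. FCFS orders jobs by arrival, while SJF orders them by estimated length.

// Illustrative job list: arrival order and estimated run time (arbitrary units).
const jobs = [
  { id: "A", arrival: 0, length: 8 },
  { id: "B", arrival: 1, length: 2 },
  { id: "C", arrival: 2, length: 4 },
];

// FCFS: serve in arrival order; older jobs go first, but short jobs can wait a long time.
function fcfsOrder(list) {
  return [...list].sort((a, b) => a.arrival - b.arrival);
}

// SJF: serve the shortest job first; long jobs risk starvation if short ones keep arriving.
function sjfOrder(list) {
  return [...list].sort((a, b) => a.length - b.length);
}

console.log(fcfsOrder(jobs).map(j => j.id)); // [ 'A', 'B', 'C' ]
console.log(sjfOrder(jobs).map(j => j.id));  // [ 'B', 'C', 'A' ]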
This document provides an overview of client-side and server-side scripting languages. It defines scripting languages as programming languages that support writing scripts to create dynamic web pages. Client-side scripting includes JavaScript and happens in the user's browser, while server-side scripting includes PHP and ASP and occurs on the web server. The document compares advantages of each like speed and capabilities, and notes that many sites use both for different purposes like interactivity versus data storage.
This presentation provides an overview of cloud computing, including:
1. Cloud computing allows on-demand access to computing resources like servers, storage, databases, networking, software, analytics and more over the internet.
2. Key features of cloud computing include scalability, availability, agility, cost-effectiveness, and device/location independence.
3. Popular cloud storage services include Google Drive, Dropbox, and Apple iCloud which offer free basic storage with options to pay for additional storage.
This document proposes a load balancing model for public clouds using cloud partitioning. It divides a large public cloud into partitions based on geographic location. When a job arrives, a main controller assigns it to the least loaded partition. Each partition uses algorithms like weighted round robin to further distribute jobs to nodes based on their calculated load degrees. The model aims to improve resource utilization and response times across the large, complex public cloud infrastructure.
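A minimal JavaScript sketch of the two-level idea described above, with hypothetical partition and node structures: a main controller picks the least-loaded partition, and the partition balancer then picks a node by weighted round robin over the nodes' weights.

// Hypothetical cloud partitions; loadDegree is a computed 0..1 load indicator.
const partitions = [
  { name: "eu-west", loadDegree: 0.7, nodes: [{ id: "n1", weight: 3 }, { id: "n2", weight: 1 }] },
  { name: "us-east", loadDegree: 0.4, nodes: [{ id: "n3", weight: 2 }, { id: "n4", weight: 2 }] },
];

// Level 1: the main controller assigns the job to the least-loaded partition.
function pickPartition(parts) {
  return parts.reduce((best, p) => (p.loadDegree < best.loadDegree ? p : best));
}

// Level 2: weighted round robin inside the partition (a higher weight means a less loaded node).
function weightedRoundRobin(partition) {
  partition.cursor = (partition.cursor || 0) + 1;
  const expanded = partition.nodes.flatMap(n => Array(n.weight).fill(n));
  return expanded[partition.cursor % expanded.length];
}

const target = pickPartition(partitions);
console.log(target.name, weightedRoundRobin(target).id);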
The document provides an introduction to basic web architecture, including HTML, URIs, HTTP, cookies, database-driven websites, AJAX, web services, XML, and JSON. It discusses how the web is a two-tiered architecture with a web browser displaying information from a web server. Key components like HTTP requests and responses are outlined. Extension of web architecture with server-side processing using languages like PHP and client-side processing with JavaScript are also summarized.
Unit 3 - Data storage and cloud computing, by MonishaNehkal
Data storage
Cloud storage
Cloud storage from LANs to WANs
Cloud computing services
Cloud computing at work
File system
Data management
Management services
This document discusses distributed databases and distributed database management systems (DDBMS). It defines a distributed database as a logically interrelated collection of shared data physically distributed over a computer network. A DDBMS is software that manages the distributed database and makes the distribution transparent to users. The document outlines key concepts of distributed databases including data fragmentation, allocation, and replication across multiple database sites connected by a network. It also discusses reference architectures, components, design considerations, and types of transparency provided by DDBMS.
The document discusses several security challenges related to cloud computing. It covers topics like data breaches, misconfiguration issues, lack of cloud security strategy, insufficient identity and access management, account hijacking, insider threats, and insecure application programming interfaces. The document emphasizes that securing customer data and applications is critical for cloud service providers to maintain trust and meet compliance requirements.
Virtualization security for the cloud computing technology, by Deep Ranjan Deb
This document outlines a seminar presentation on virtualization security for cloud computing. It begins with an introduction noting the widespread adoption of virtualization in data centers. The literature survey section summarizes several papers on virtualization security risks and solutions. The presentation defines virtualization and cloud enabling technologies. It discusses why virtualization is important, the architecture of virtualization, and applications of virtualization. The presentation also covers virtualization risks, threats, security approaches, and the issue of virtual machine sprawl. It outlines benefits of virtualization and concludes with future research topics and references.
Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.
Optimistic concurrency control in Distributed Systems, by mridul mishra
This document discusses optimistic concurrency control, which is a concurrency control method that assumes transactions can frequently complete without interfering with each other. It operates by allowing transactions to access data without locking and validating for conflicts before committing. The validation checks if other transactions have read or written the same data. If a conflict is found, the transaction rolls back and restarts. The document outlines the basic algorithm, phases of transactions (read, validation, write), and advantages like low read wait time and easy recovery from deadlocks and disadvantages like potential for starvation and wasted resources if long transactions abort.
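The read/validate/write cycle described above can be sketched in a few lines of JavaScript; the in-memory store and version counters here are illustrative assumptions, not part of the original document.

// Illustrative in-memory store: each key carries a version number.
const store = { x: { value: 10, version: 1 } };

// Read phase: remember the version we saw and work on a local copy, no locks taken.
function beginTransaction(key) {
  const { value, version } = store[key];
  return { key, readVersion: version, localValue: value };
}

// Validation + write phase: commit only if nobody else wrote in the meantime.
function commit(txn, newValue) {
  const current = store[txn.key];
  if (current.version !== txn.readVersion) {
    return false; // conflict detected: the caller rolls back and restarts
  }
  store[txn.key] = { value: newValue, version: current.version + 1 };
  return true;
}

const t = beginTransaction("x");
console.log(commit(t, t.localValue + 5)); // true on first attempt, false after a conflicting write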
This document summarizes distributed computing. It discusses the history and origins of distributed computing in the 1960s with concurrent processes communicating through message passing. It describes how distributed computing works by splitting a program into parts that run simultaneously on multiple networked computers. Examples of distributed systems include telecommunication networks, network applications, real-time process control systems, and parallel scientific computing. The advantages of distributed computing include economics, speed, reliability, and scalability while the disadvantages include complexity and network problems.
Virtualization allows multiple operating systems and applications to run on a single server at the same time, improving hardware utilization and flexibility. It reduces costs by consolidating servers and enabling more efficient use of resources. Key benefits of VMware virtualization include easier manageability, fault isolation, reduced costs, and the ability to separate applications.
One can Study the key concept of Virtualization, its types, why Virtualization and what are the use cases and Benefits of Virtualization and example of Virtualization.
Cloud computing allows users to access virtual hardware, software, platforms, and services on an as-needed basis without large upfront costs or commitments. This transforms computing into a utility that can be easily provisioned and composed. The long-term vision is for an open global marketplace where IT services are freely traded like utilities, lowering barriers and allowing flexible access to resources and software for all users.
This document summarizes a dissertation on an improved load balancing technique for secure data in cloud computing. The dissertation discusses research issues in load balancing and data security in cloud computing. It proposes a load balancing methodology that uses a load balancer, Kerberos authentication, and Nginx load balancing algorithms like round robin and least connections to securely store and balance load of encrypted data across multiple cloud nodes. The methodology is implemented using tools like HP LoadRunner, Amazon Web Services, and Jelastic cloud platform. Performance is analyzed in terms of transaction time. The proposed technique aims to improve resource utilization, access control, data security, and efficiency in cloud environments.
Improve Customer Experience with Multi CDN Solution, by Cloudxchange.io
1) Intelligently balancing content delivery among multiple clouds and CDNs using Cedexis' technology can help approach 100% availability by routing around outages.
2) Cedexis' real-user monitoring data and intelligent routing capabilities allow enterprises to control traffic across multiple CDNs and clouds to improve performance and reduce costs.
3) Cedexis helps customers implement hybrid CDN strategies using their own infrastructure like data centers combined with multiple third-party CDNs to gain control and performance benefits while reducing CDN spend.
Load balancing distributes workload and computing resources across multiple servers or devices to improve performance and prevent individual devices from being overloaded. There are different types of load balancing algorithms like static, dynamic, and round robin that assign workloads in different ways. Load balancing in the cloud can occur at the network, HTTP, or application layers and can be implemented using hardware, software, or virtual load balancers to improve flexibility and handle high traffic volumes. The primary goal of load balancing is to improve speed, protect devices, and maintain website traffic as internet usage increases rapidly.
This document discusses different architectural approaches for client-server systems, including 2-tier, 3-tier, and N-tier architectures. A 2-tier architecture consists of clients and a single application server, while 3-tier and N-tier architectures separate functionality into distinct presentation, application processing, and data tiers for improved scalability and flexibility.
Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Enviro..., by SaikiranReddy Sama
In Dynamic Resource Allocation, we present a system that uses virtualization technology to allocate data center resources dynamically.
We introduce the concept of skewness.
By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources.
We develop a set of heuristics that prevent overload in the system effectively while saving the energy used.
Dynamic Resource Allocation Using Virtual Machines for Cloud Computing
Consistency as a Service: Auditing Cloud Consistency, by Papitha Velumani
This document discusses consistency as a service (CaaS) for auditing cloud consistency. The CaaS model consists of a large data cloud maintained by a cloud service provider and multiple small audit clouds consisting of user groups. The audit clouds can verify the consistency level promised by the data cloud's service level agreement. A two-level auditing architecture with loosely synchronized clocks allows the audit clouds to quantify consistency violations and devise strategies to reveal many violations. Extensive experiments validated the heuristic auditing strategy.
This document discusses cloud computing concepts including definitions, essential characteristics of abstraction and virtualization, benefits such as on-demand access and elastic resources, and how virtualization enables key attributes like scalability. It provides examples of Google, Microsoft Azure, and Amazon Web Services cloud platforms. Load balancing is described as a way to distribute requests across virtualized resources to optimize performance and avoid overloads. More advanced load balancers can monitor resource health and workload to intelligently assign tasks.
This document discusses cloud computing concepts including its key characteristics, service models, and deployment models. Cloud computing refers to applications and services delivered over the internet using shared computing resources. The main advantages of cloud computing are no upfront investment in servers or software, flexibility, scalability, and pay-per-use models. The three service models are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The four deployment models are private cloud, public cloud, hybrid cloud, and community cloud. Security and programmability are ongoing challenges that cloud computing aims to address through standardization.
Evaluation of Two-Level Global Load Balancing Framework in Cl..., by ijcsit
With technological advancements and constant changes of the Internet, cloud computing has become today's trend. With the lower cost and convenience of cloud computing services, users have increasingly put their Web resources and information in the cloud environment. The availability and reliability of these client systems will become increasingly important; today, even the slightest interruption of a cloud application has a significant impact on users. How to ensure the reliability and stability of cloud sites is therefore an important issue, and load balancing would be one good solution. This paper presents a framework for global server load balancing of Web sites in a cloud with a two-level load balancing model. The proposed framework is intended for adapting an open-source load-balancing system, and the framework allows the network service provider to deploy load balancers in different data centers dynamically while the customers need more load balancers for increasing the availability.
Cloud computing allows users to access data and programs over the internet rather than on a local hard drive. Amazon Web Services (AWS) is a major provider of cloud computing infrastructure and services. A case study describes how Netflix uses AWS to host its video streaming platform, taking advantage of AWS's scalable and cost-effective resources. The document discusses concepts of cloud computing and outlines some of AWS's core services like EC2, S3, and advantages they provide to users.
This document discusses various cloud computing architectures including workload distribution, cloud bursting, elastic disk provisioning, resource pooling, dynamic failure detection and recovery, and capacity planning architectures. It also covers cloud mechanisms like automated scaling listeners, load balancers, pay-per-use monitors, audit monitors, service level agreements (SLAs), and fail-over systems that are important components of cloud architectures. The key cloud architectures aim to optimize resource utilization, enable horizontal and vertical scaling, provide high availability, and implement billing and monitoring functions.
This document discusses cloud computing concepts including definitions, architecture, service models, and simulation tools. It summarizes a student project presentation on cloud computing that examines key aspects like scalability, pay-per-use model, and virtualization. It also evaluates cloud simulators CloudSim, GreenCloud and iCanCloud, comparing their features, scenarios and performance graphs. The document proposes a novel load balancing approach and its implementation through a dynamic information system interface.
This document discusses resource management and security in cloud computing. It covers topics such as inter-cloud resource management, resource provisioning methods, global exchange of cloud resources, and security challenges in cloud computing. Specifically, it discusses demand-driven, event-driven and popularity-driven methods for resource provisioning in clouds. It also summarizes proposed architectures for global exchange of cloud resources across geographic locations. Finally, it outlines some key security concerns for cloud computing like data breaches and the shared responsibility model between cloud providers and customers for security.
This document presents a scalable approach to quantify availability in large-scale Infrastructure as a Service (IaaS) clouds. It models component failures using three pools - hot, warm, and cold. Dependencies between pools are resolved using fixed-point iteration. It compares analytic-numeric solutions from the proposed interacting Markov chain approach to monolithic models. The document also discusses optimizing data replication in clouds to minimize violations of applications' quality of service requirements. It formulates the problem as an integer program and proposes transforming it to a minimum-cost maximum-flow problem to find optimal solutions efficiently.
Dynamic Cloud Partitioning and Load Balancing in Cloud, by Shyam Hajare
Cloud computing is the emerging and transformational paradigm in the field of information technology. It mostly focuses on providing various services on demand; resource allocation and secure data storage are some of them. Storing a huge amount of data and accessing data from such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing by considering static as well as dynamic parameters can improve the performance of the cloud service provider and can improve user satisfaction. Implementing the model can provide a dynamic way of resource selection depending upon different situations of the cloud environment at the time of accessing cloud provisions based on cloud partitioning. This model can provide an effective load balancing algorithm over the cloud environment, better refresh-time methods, and better load-status evaluation methods.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
Cloud computing provides on-demand access to shared computing resources like networks, servers, storage, applications and services over the internet. It has seen rapid growth in recent years. There are different service models like Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) depending on what capabilities are provided to the user. Cloud computing can be deployed using private, public, hybrid or community models depending on who manages the infrastructure and who has access to it. While cloud computing provides benefits like flexibility, scalability and cost savings, concerns around security, privacy and reliability remain challenges to adoption.
The document discusses cloud computing concepts, architectures, and research challenges. It describes the key layers of cloud computing including hardware, infrastructure, platform, and application layers. It also discusses cloud service models (IaaS, PaaS, SaaS), types of clouds (public, private, hybrid), and characteristics. Several research challenges are outlined including automated provisioning, VM migration, server consolidation, traffic management, data security, and developing efficient software frameworks and storage technologies for cloud environments.
A Distributed Control Law for Load Balancing in Content Delivery Networks, by Sruthi Kamal
1. The document presents a novel load balancing algorithm for content delivery networks that aims to minimize load imbalance and metric movement costs.
2. It proposes estimating system state through probability distributions of node capacities and load to help peers schedule transfers without centralized control.
3. Each peer independently manipulates partial system information and reassigns virtual servers based on the approximated system state.
Cloud computing is an on-demand service in which shared resources, information, software and other devices are provided to the end user as per their requirement at a specific time. A cloud consists of several elements such as clients, datacenters and distributed servers. A large number of clients and end users are involved in a cloud environment. These clients may make requests to the cloud system simultaneously, making it difficult for the cloud to manage the entire load at a time. The load can be CPU load, memory load, delay or network load. This might cause inconvenience to the clients, as there may be delay in the response time, or it might affect the performance and efficiency of the cloud environment. So, the concept of load balancing is very important in cloud computing to improve the efficiency of the cloud. Good load balancing makes cloud computing more efficient and improves user satisfaction. This paper gives an approach to balance the incoming load in a cloud environment by making partitions of the public cloud.
4. ABSTRACT
The concept of Cloud computing has significantly changed the field of parallel and distributed computing systems today. Cloud computing enables a wide range of users to access distributed, scalable, virtualized hardware and/or software infrastructure over the Internet.
5. ABSTRACT
Load balancing is a methodology to distribute workload across multiple computers or other resources over the network links to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload.
6. ABSTRACT
With the recent growth of the technology, resource control or load balancing in cloud computing is a main challenging issue. An efficient load balancing scheme ensures efficient resource utilization by provisioning resources to cloud users on demand in a pay-as-you-go manner. Load balancing in the cloud computing environment has an important impact on performance. Good load balancing makes cloud computing more efficient and improves user satisfaction.
7. MOTIVATION
PROBLEM
I. Cloud Computing is a new trend emerging in the IT environment with huge requirements of infrastructure and resources.
II. Availability of cloud systems is one of the main concerns of cloud computing. The term availability of clouds is mainly evaluated by the type of information compared with resource scaling.
III. Workload control is crucial to improve system performance and maintain stability.
8. MOTIVATION
SOLUTION
I. Load balancing in cloud computing provides an efficient solution to various issues residing in cloud computing environment set-up and usage.
II. Load balancing must take into account two major tasks: one is resource provisioning or resource allocation, and the other is task scheduling in a distributed environment.
9. MOTIVATION
RESULT
Efficient provisioning of resources and scheduling of resources as well as tasks will ensure:
I. Resources are easily available on demand.
II. Resources are efficiently utilized under conditions of high/low load.
III. Energy is saved in case of low load (i.e. when usage of cloud resources is below a certain threshold).
IV. The cost of using resources is reduced.
10. CLOUD COMPUTING
Cloud computing is emerging as a new paradigm of large-scale distributed computing. It has moved computing and data away from desktop and portable PCs into large data centres.
11. CLOUD COMPUTING
Cloud Computing is made up by aggregating two terms in the field of technology: the first term is Cloud and the second term is computing.
12. CLOUD COMPUTING
What is Cloud?
Cloud is a pool of heterogeneous resources. It is a mesh of huge infrastructure and has no relevance to its name, Cloud. Infrastructure refers to both the applications delivered to end users as services over the Internet and the hardware and system software in data centres that are responsible for providing those services.
14. CLOUD COMPUTING
SAAS: E-Mail, ERP, CRM, Collaborative
PAAS: Application Development, Web, Decision Support, Streaming
IAAS: Caching, File System, Management, Networking, Security
15. TYPES OF CLOUD COMPUTING
PRIVATE CLOUD
For people who are the type to keep everything within arm's reach and on a leash -- dogs, children, keys, you name it. Afraid of releasing your data to a public cloud? Need to constantly monitor it? These chained-in, restrained cloud environments are protected behind a firewall.
16. TYPES OF CLOUD COMPUTING
PUBLIC CLOUD
If private clouds are like pets on leashes, public clouds are wild animals roaming free. Public cloud owners are those who are willing to trust data to off-premises cloud providers. They gain the benefits of pay-as-you-go services, so you only pay for what you use.
17. LOAD BALANCING
Load balancing is a relatively new technique that facilitates networks and resources by providing maximum throughput with minimum response time. By dividing the traffic between servers, data can be sent and received without major delay. Different kinds of algorithms are available that help distribute the traffic load between the available servers. Without load balancing, users could experience delays, timeouts and possibly long system responses.
19. LOAD BALANCING ALGORITHMS
STATIC ALGORITHMS
The cloud provider installs homogeneous resources.
Resources in the cloud are not flexible.
The cloud requires prior knowledge of node capacity, processing power, memory, performance and statistics of user requirements.
DYNAMIC ALGORITHMS
The cloud provider installs heterogeneous resources.
Resources are flexible in a dynamic environment.
The cloud cannot rely on prior knowledge; it takes run-time statistics into account.
20. [Flowchart: resource selection loop - request a connection to a resource, retrieve the highest-priority connection string, check whether the data resource is available, choose the next available resource while more resources remain, return the resource to the requestor, and collect usage patterns before stopping.]
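A rough JavaScript rendering of the loop in the flowchart above, with hypothetical helper names (isAvailable, recordUsage) standing in for details the slide does not give:

// Resources are assumed to be ordered by priority; isAvailable() and recordUsage() are assumed helpers.
function requestResource(resources) {
  for (const resource of resources) {   // retrieve the highest-priority entry first
    if (resource.isAvailable()) {       // "data resource available?"
      resource.recordUsage();           // collect usage patterns
      return resource;                  // return the resource to the requestor
    }
    // otherwise: more resources available? try the next one
  }
  return null;                          // no resource available: stop
}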
22. CLIENT SIDE LOAD BALANCER
A load balancer forwards packets to web servers according to the different workloads on the servers. However, it is hard to implement a scalable load balancer because of both the cloud's commodity business model and the limited infrastructure control allowed by cloud providers. A Client-side Load Balancer (CLB) solves this problem by using a scalable cloud storage service. CLB delivers static content directly and allows clients to choose back-end web servers for dynamic content.
23. EXISTING SOLUTIONS
Load Balancer: a hardware-based load balancer to handle high levels of load; a software-based load balancer for generic servers.
DNS Load Balancing: the local DNS server can hand out different IP addresses to different DNS servers.
Layer 2 Optimization: one can fully control the infrastructure; it does not impose a single performance bottleneck, expensive hardware, or lack of adaptiveness.
24. EXISTING SOLUTIONS: DRAWBACKS
Load Balancer: a hardware-based load balancer is expensive; a software-based load balancer is not scalable.
DNS Load Balancing: lack of adaptiveness and granularity.
Layer 2 Optimization: although one can fully control the infrastructure, this ability could open doors for security exploits.
25. PROPOSED SYSTEM
Compared to a Software Load Balancer: the architecture has no single point of scalability bottleneck; communication flows directly between the browser and the chosen back-end web server.
Compared to DNS Load Balancing: the architecture has a finer load balancing granularity and adaptiveness; the client's browser makes the decision.
Compared to Layer 2 Optimization: it achieves high scalability without requiring the sophisticated control over the infrastructure that layer 2 optimization does.
27. EXPLANATION
For each dynamic page, we create an anchor static page. This anchor page includes two parts.
The first part contains the list of web servers' IP addresses and their individual load information (such as CPU, memory, network bandwidth, etc.). They are stored in a set of JavaScript variables that can be accessed directly from other JavaScript code.
The second part contains the client-side load balancing logic, again written in JavaScript.
[Diagram: the static anchor page bundles the web server load file and the LB JavaScript.]
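The slides do not show the exact variable layout, but the first part of the anchor page might look like the following sketch (the names and numbers are illustrative): a plain JavaScript array that the load balancing logic can read directly.

// Illustrative "server list and load" block of the anchor page.
// Each entry reports an IP address and a relative load figure
// (e.g. derived from CPU, memory and bandwidth usage).
var serverList = [
  { ip: "203.0.113.10", load: 0.25 },
  { ip: "203.0.113.11", load: 0.60 },
  { ip: "203.0.113.12", load: 0.15 }
];

// The second part of the anchor page (the LB logic) reads serverList
// when the page's onLoad handler calls load().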
29. AMAZON - S3 (SIMPLE STORAGE SERVICE)
S3's domain hosting capability is used, which maps a domain (by DNS aliasing) to S3. We create a bucket with the same name as the domain name (e.g., www.website.com). When a user accesses the domain, the request is routed to S3, and S3 uses the Host header to determine the bucket from which to retrieve the content.
30. SAMPLE ANCHOR STATIC PAGE
<html>
<head><title></title>
<script type="text/javascript">
// the load balancing logic
</script>
<script type="text/javascript">
// the server list and load
// information in JavaScript
// variables
</script></head>
<body onLoad="load();">
<span id="ToBeReplaced"> </span>
</body></html>
31. WORKING
When a client browser loads an anchor page, the browser executes the following steps in JavaScript:
1. Examine the load variables to determine to which web server it should send the actual request. The current algorithm randomly chooses a web server, where the probability of choosing any one server is inversely proportional to its relative load.
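Step 1 could be sketched as follows, assuming a serverList array like the one sketched earlier; the weighting scheme shown is one straightforward reading of "inversely proportional to its relative load".

// Pick a server with probability inversely proportional to its reported load.
function chooseServer(servers) {
  const weights = servers.map(s => 1 / Math.max(s.load, 0.01)); // avoid division by zero
  const total = weights.reduce((sum, w) => sum + w, 0);
  let r = Math.random() * total;
  for (let i = 0; i < servers.length; i++) {
    r -= weights[i];
    if (r <= 0) {
      return servers[i];
    }
  }
  return servers[servers.length - 1]; // numerical fallback
}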
32. WORKING
2. The JavaScript sends a request to a proxy on the target web server. The JavaScript sends over two pieces of information encoded as URL parameters. First, it sends the browser cookie associated with the site (document.cookie). Second, it sends the URL path (location.pathname).
[Diagram: one web server hosting the proxy alongside the normal web contents.]
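Step 2 could be implemented roughly like this; the proxy path (/clb-proxy) and parameter names are assumptions for illustration. Loading the proxy response through a dynamically added script tag is what lets the page execute the returned JavaScript despite cross-domain restrictions, as the conclusion notes.

// Send the cookie and URL path to the proxy on the chosen web server,
// encoded as URL parameters, and load the response as a script.
function sendToProxy(server) {
  const params =
    "cookie=" + encodeURIComponent(document.cookie) +
    "&path=" + encodeURIComponent(location.pathname);
  const script = document.createElement("script");
  script.src = "http://" + server.ip + "/clb-proxy?" + params;
  document.head.appendChild(script); // the returned JavaScript runs once loaded
}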
33. WORKING
3. The proxy uses the cookie and URL path to re-construct a new HTTP request and sends the request to the actual web server.
4. The web server processes the request, invokes the dynamic script processor as necessary, and returns the result back to the proxy.
5. The proxy wraps the result in JavaScript.
6. The client browser executes the returned JavaScript from the proxy, updates the page display, and updates the cookies if a Set-Cookie header has been returned.
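The slides do not say how the proxy itself is written; purely as an illustration of steps 3 to 5, here is a sketch of such a proxy in Node.js that rebuilds the request from the cookie and path parameters, forwards it to the local web server, and wraps the result as JavaScript. The port numbers and wrapping format are assumptions, not the paper's implementation.

// Hypothetical proxy sketch: reconstruct the request, forward it to the real
// web server, and wrap the reply in JavaScript for the browser to execute.
const http = require("http");

http.createServer((req, res) => {
  const query = new URL(req.url, "http://localhost").searchParams;
  const options = {
    host: "127.0.0.1",
    port: 8080,                              // the actual web server
    path: query.get("path") || "/",
    headers: { Cookie: query.get("cookie") || "" },
  };
  http.get(options, upstream => {
    let body = "";
    upstream.on("data", chunk => { body += chunk; });
    upstream.on("end", () => {
      res.writeHead(200, { "Content-Type": "text/javascript" });
      // Step 5: wrap the result as JavaScript, mirroring the sample on slide 34
      // (a setCookie wrapper for any Set-Cookie header is omitted here).
      res.end("function page() { return " + JSON.stringify(body) + "; }");
    });
  });
}).listen(3000);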
34. SAMPLE JAVASCRIPT
The JavaScript returned from the proxy looks like the following.
function page() {
  return "<HTML page content>";
}
function setCookie() {
  // set cookie if instructed by the web server
}
35. WORKING
As described above, the goal of the load balancing logic is to choose a back-end web server based on the load, send the client request (cookie along with URL path) to the proxy, receive the returned JavaScript, and update the current HTML page.
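Putting the pieces together, the final update step might look like the following sketch, reusing the hypothetical chooseServer and sendToProxy helpers from the earlier sketches and the page() and setCookie() functions from the sample JavaScript; it assumes sendToProxy arranges for updatePage to run once the proxy's script has loaded (e.g. via script.onload).

// Entry point referenced by the anchor page's onLoad handler.
function load() {
  const server = chooseServer(serverList);   // step 1: pick a lightly loaded server
  sendToProxy(server);                       // step 2: ask its proxy for the page
}

// Called once the proxy's JavaScript has been loaded and executed.
function updatePage() {
  document.getElementById("ToBeReplaced").innerHTML = page();  // step 6: redraw the page
  if (typeof setCookie === "function") {
    setCookie();                             // apply Set-Cookie, if the server sent one
  }
}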
36. ADVANTAGES
Service-specific load balancing
Wide area services
Scalable services
Client code portability
Parallelism
Fault tolerance
37. CONCLUSION
A cloud is an attractive infrastructure solution for web applications, since it enables a web application to adjust its infrastructure capacity dynamically on demand.
A scalable load balancer is a key building block to efficiently distribute a large volume of requests and fully utilize the horizontally scaling infrastructure cloud.
However, as we pointed out, it is not trivial to implement a scalable load balancer due to both the cloud's commodity business model and the limited infrastructure control allowed by cloud providers.
In this study, the Client-side Load Balancing (CLB) architecture uses a scalable cloud storage service such as Amazon S3.
38. CONCLUSION
Through S3, CLB directly delivers static contents while allowing a client to choose a corresponding back-end web server for dynamic contents.
A client makes the load balancing decision based on the list of back-end web servers and their load information.
CLB uses JavaScript to implement the load balancing algorithm, which not only makes the load balancing mechanism transparent to users, but also gets around browsers' cross-domain security limitation.
Our evaluation shows that the proposed architecture is scalable while introducing only a small latency overhead.
39. REFERENCES
Amazon Web Services (AWS). http://aws.amazon.com
F5 Networks. http://www.f5.com
Google Inc. Google Search Engine.
S. Wee and H. Liu, IEEE paper on Client Side Load Balancer Using Cloud.