Load balancer and Global Load Balancer
Load Balancer, High Availability, Fault tolerance, Failover capability
LOAD BALANCER
A Load Balancer is a network device that shares, or distributes, user requests and connections among the individual servers or nodes in a cluster, typically within the same data center or LAN, though sometimes across different data centers. Its aim is to keep services available when a node in the cluster fails, and to spread load and connections across the nodes.
A Load Balancer can therefore provide High Availability within a single LAN or data center, or across different data centers. (See the note below on the Load Balancer in Oracle Identity Management components.)
The typical use of a Load Balancer is to front-end two or more servers: the Load Balancer device receives the initial request, forwards it to one of the servers in its pool, and returns that server's response to the client. But why front-end servers with another device in the first place? The answer is simple: because all clients send their requests to the front-end device, which routes them to the servers in the pool, you are free to size the pool as you need. You can then not only distribute the request load across the configured servers, but also introduce redundancy for your application service. Say you have three servers in the pool; you can take one off-line for maintenance or patching and still have two servers available to service client requests. The clients communicate only with the LB device and have no knowledge of the backend servers in the pool, so along with load distribution you get redundancy almost for free.
The Load Balancer device hosts the IP address for the published hostname of the application server, so to the clients it appears to be the one responding to every request; in reality it redirects each request to one of the backend servers configured in its pool. How it chooses a server depends on the algorithm, or load-balancing technique, configured. The intent is to route each received client request to one of the servers based on some criterion. The simplest is Round Robin: the first request goes to the first server in the pool, the second request to the second server, and so on until the last server in the pool, after which the sequence restarts from the first server. The idea is to distribute the load equally among all the servers, which works well when all the servers in the pool have the same specifications. However, this algorithm does not take current server load into account and keeps sending requests in the configured round-robin sequence regardless.
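The round-robin selection described above can be sketched in a few lines of Python. This is a minimal illustration, not a real load balancer; the server names are hypothetical.

```python
from itertools import cycle

# Hypothetical pool of backend servers behind the load balancer.
servers = ["app1.example.com", "app2.example.com", "app3.example.com"]

class RoundRobinBalancer:
    """Sends each incoming request to the next server in the pool, wrapping
    back to the first server after the last one."""
    def __init__(self, pool):
        self._pool = cycle(pool)

    def pick(self):
        return next(self._pool)

lb = RoundRobinBalancer(servers)
# Six requests cycle through the three servers twice, in order.
assignments = [lb.pick() for _ in range(6)]
```

Note that the balancer assigns requests purely by position in the sequence; nothing here looks at how busy a server actually is, which is exactly the round-robin limitation described above.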
GLOBAL LOAD BALANCER
A Global Load Balancer (GSLB, Global Server Load Balancer) is also a load balancer, but it balances or routes requests across data centers and provides failover between them. The typical scenario is a Multi Data Center (MDC) configuration.
Both an LB and a GLB provide redundancy; the difference is that an LB operates within the same LAN or data center, whereas a GLB operates across distant data centers or sites.
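The failover behavior a GSLB provides can be sketched as follows. This is a simplified model only, assuming hypothetical data-center names and IPs: while the primary site is healthy the GSLB answers with its address, and when it goes down the answer fails over to the secondary site.

```python
# Minimal sketch of DNS-style GSLB failover between two data centers.
# Names and IP addresses are illustrative, not from any real deployment.
data_centers = [
    {"name": "dc-east", "ip": "203.0.113.10", "healthy": True},
    {"name": "dc-west", "ip": "198.51.100.10", "healthy": True},
]

def resolve(dcs):
    """Return the IP of the first healthy data center (primary first)."""
    for dc in dcs:
        if dc["healthy"]:
            return dc["ip"]
    raise RuntimeError("no healthy data center available")

ip = resolve(data_centers)           # primary is healthy, so its IP is returned
data_centers[0]["healthy"] = False   # simulate an outage at the primary site
failover_ip = resolve(data_centers)  # requests now fail over to the secondary
```

A real GSLB would learn `healthy` from active health checks or site monitors rather than a flag, but the decision logic is the same ordered fallback.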
GEO LOAD BALANCING
Geo Load Balancing improves on "traditional" GSLB by taking the visitor's geographic point of access into account and routing the request to the best-suited data center or application instance, whether by geographic distance or some other criterion.
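Routing by geographic distance can be sketched with a nearest-data-center lookup. The data-center locations below are hypothetical, and real geo load balancers typically use IP geolocation databases and network latency rather than raw great-circle distance; this only illustrates the "closest site wins" idea.

```python
import math

# Hypothetical data-center coordinates as (latitude, longitude) in degrees.
DATA_CENTERS = {
    "us-east": (39.0, -77.5),
    "eu-west": (53.3, -6.3),
    "ap-south": (19.1, 72.9),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_dc(client_location):
    """Route the client to the geographically closest data center."""
    return min(DATA_CENTERS, key=lambda dc: haversine_km(client_location, DATA_CENTERS[dc]))

# A visitor in London should be routed to the European data center.
region = nearest_dc((51.5, -0.1))
```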
In addition to the clear performance benefits, geo load balancing enables the delivery of location-specific content and services, leveraging existing CDN technology.
With geo distribution, for example, a website owner can deliver location-specific content, advertisements, and offers that are relevant to an individual customer.
Alternatively, the application administrator could roll out a new version to only a select set of customers, to avoid disrupting the entire user base.
Oracle Identity Manager
LOAD BALANCER
A Load Balancer is a network device. Load balancer refers to load sharing (i.e. share or distribute the user requests, connections), among individual servers or nodes in a cluster, typically in the same data center, LAN or across different data centers. Aim is to provide continuous services in case of failure of a node in a cluster, share and distribute load, connections between the nodes.
So when referring to Load Balancer it could provide High Availability in the same LAN or data center or across different data centers. (See below note on Load Balancer in Oracle Identity Management components)
Typical use of Load Balancer is to front-end two or more servers where the Load Balancer device receives the initial request and sends the request to its pool of servers, and returns the server's response to the client. But why would one require to front-end servers with another device in the first place? The answer is simple- by front-ending your application servers (with the LB device), all clients send their request to this front-end device and in turn the requests are routed to the servers in the pool. Now since you have the freedom to have a pool of servers to service the requests, you can not only distribute load of the requests across your configured servers in the pool, but also introduce redundancy for your application service. Say you have 3 servers in the pool. Now you can take one server off-line for maintenance or patching etc, and your pool still has 2 servers available to service client requests. The clients only communicate with the LB device and do not have any knowledge of the backend servers in the pool. So not only you got to distribute load among your application servers but effortlessly got redundancy as well. The Load Balancer device hosts the IP address for the published hostname of the Application server and to the clients it appears to be the one responding to all the client requests. In turn the Load Balancer device redirects the request to the backend servers configured in its pool of servers. Now how it sends the received request to one of the server in its pool of servers depends upon the type of algorithm or load balancing technique configured. The intent is to route the received client request based on a criteria to one of the server. It could be based as simply as Round Robin, meaning the first received request is sent to the first server in the pool, second request to the second server in the pool, and then to the next server and so on, until the last server available in the pool. 
Then again the requests are sent starting from the first server. Here the idea is simple to distribute the load equally among all the servers. This technique can be effectively adopted when all the servers in the pool are of same specifications. However, this protocol does not take into account the current server load and will continue to send the requests as per the configured sequence in the round-robin.
GLOBAL LOAD BALANCER
Global Load Balancer is also a load balancer but is meant for balancing or routing requests across Data Centers. A Global Load Balancer (GSLB - Global server load balancer) refers to a fail-over load balancer between different data centers. Typical scenarios being a Multi Data Center (MDC) configuration.
Both LB and GLB are for providing redundancy, the difference being LB is in the same LAN or data center whereas GLB is across distant data center or sites.
GEO LOAD BALANCING
Geo Load Balancing improves on the "traditional" GSLB by taking into account the visitor’s geographic point of access and routing the user request, to the best-suited data center or application instance, be it geographic distance or any other criteria.
In addition to the clear performance benefits, geo load balancing enables the delivery of location-specific content and services, leveraging existing CDN technology.
With geo distribution, for example, a website owner can deliver location-specific content, advertisements, and offers that are relevant an individual customer.
Alternately, the application administrator could roll out a new version only to a select set of customers, in order to avoid disrupting the entire user base.
Oracle Identity Manager
The Oracle Identity Manager High Availability architecture is built on top of an underlying Oracle WebLogic cluster. In the Oracle OIM HA deployment, OIM, being a Java EE application, is installed and deployed on WebLogic Server. In an HA deployment, OIM runs on a WebLogic cluster with at least two member servers (you can include more, depending on your environment's load, number of users, number of connections, and similar requirements). OIM High Availability therefore leverages the High Availability of the underlying WebLogic cluster: in case of a failure, all session information and state are available to the other members of the cluster.
There are two ways to distribute load between the OIM nodes in the OIM cluster. The first is to use the WebLogic cluster itself, which manages the distribution of load and user connection requests among its nodes. The second is to use an external Load Balancer, which sends user requests and connections to the OIM cluster nodes according to the load-distribution algorithm configured on the Load Balancer.
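As a sketch of the second approach, here is one policy an external load balancer might use besides round robin: least connections, which favors the node currently serving the fewest clients. The node names and connection counts are hypothetical, not taken from a real OIM deployment.

```python
# Current open connections per cluster node (illustrative values only).
active = {"oimhost1": 12, "oimhost2": 7}

def least_connections(conns):
    """Pick the node currently serving the fewest open connections."""
    return min(conns, key=conns.get)

node = least_connections(active)  # oimhost2 has fewer connections
active[node] += 1                 # the chosen node takes the new connection
```

Unlike round robin, this policy adapts to uneven load: a node bogged down with long-lived sessions naturally receives fewer new connections.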
Oracle implementation of MDC
Oracle Multi Data Center Implementation
Business Continuity and Disaster Recovery
Difference between RTO and RPO (Recovery Time Objective and Recovery Point Objective)
There is a good chance you would like your business to survive any future disaster, along with the problems that follow it. While it is nearly impossible to predict what the next disaster will be, it is easy to prepare for one, especially if you have an effective business continuity plan. Among the many key metrics involved in these plans, the two most important are RTO and RPO.
While both RTO and RPO are important elements of continuity plans, and they sound fairly similar, they are actually quite different. In this article we define RTO and RPO and look at the difference between the two concepts.
RTO defined
RTO, or Recovery Time Objective, is the target time you set for the recovery of your IT and business activities after a disaster has struck. The goal here is to calculate how quickly you need to recover, which can then dictate the type of preparations you need to implement and the overall budget you should assign to business continuity.
If, for example, you find that your RTO is five hours, meaning your business can survive with systems down for this amount of time, then you will need to ensure a high level of preparation and a higher budget to ensure that systems can be recovered quickly. On the other hand, if the RTO is two weeks, then you can probably budget less and invest in less advanced solutions.
RPO defined
RPO, or Recovery Point Objective, is focused on data and your company’s loss tolerance in relation to your data. RPO is determined by looking at the time between data backups and the amount of data that could be lost in between backups.
As part of business continuity planning, you need to figure out how long you can afford to operate without that data before the business suffers. A good example of setting an RPO is to imagine that you are writing an important, yet lengthy, report. At some point your computer will crash, and the content written after your last save will be lost. How much time can you tolerate spending to recover, or rewrite, that missing content?
That time becomes your RPO, and it should dictate how often you back your data up, or in this case save your work. If you find that your business can survive three to four days between backups, then the RPO would be three days (the shorter end of that window).
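The relationship between the backup interval and potential data loss can be sketched with simple timestamp arithmetic. The backup times, failure time, and the six-hour RPO target below are all hypothetical, chosen only to illustrate the calculation.

```python
from datetime import datetime, timedelta

# Hypothetical backup schedule (backups taken every 6 hours).
backups = [
    datetime(2024, 1, 1, 0, 0),
    datetime(2024, 1, 1, 6, 0),
    datetime(2024, 1, 1, 12, 0),
]
failure = datetime(2024, 1, 1, 16, 30)  # hypothetical moment of the crash

# Everything written since the last backup before the failure is lost.
last_backup = max(b for b in backups if b <= failure)
data_loss = failure - last_backup

# An RPO of 6 hours means we can tolerate losing at most 6 hours of data.
rpo = timedelta(hours=6)
meets_rpo = data_loss <= rpo
```

With a six-hour backup interval, the worst-case loss is just under six hours of data, so the backup schedule must always be at least as frequent as the RPO demands.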
What’s the main difference between RTO and RPO?
The major difference between these two metrics is their purpose. The RTO is usually large scale, and looks at your whole business and systems involved. RPO focuses just on data and your company’s overall resilience to the loss of it.
While they may be different, you should consider both metrics when looking to develop an effective BCP. If you are looking to improve or even set your RTO and RPO, contact us today to see how our business continuity systems and solutions can help.