Introduction
A load balancer distributes incoming requests across two or more machines. Load balancers most often sit in front of web servers, but they are not limited to web traffic. They use health checks to decide which nodes receive traffic, based on each node's availability. A health check can be HTTP, HTTPS, or TCP based. This document covers health checks in depth.
Adding a Load Balancer
Adding a load balancer takes a single step. Click the blue + and choose Add Load Balancers. Then select the location, give the load balancer a basic configuration, add any initial forwarding rules, choose the VPC network to associate with it, attach any firewall rules, and add health checks. See our Load Balancer Feature Reference to learn about limits on the number of forwarding rules.
Adding Health Checks
Add health checks on the main Add Load Balancer page or on the Manage Load Balancer -> Configuration -> Health Checks page of each individual load balancer. You can also manage health checks using the Rcs API.
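As a rough illustration of managing a health check programmatically, the sketch below sends the check settings to an API endpoint with Python's requests library. The base URL, authentication header, field names, and endpoint path are all assumptions for demonstration, not the documented Rcs API; consult the API reference for the real calls.

```python
# Illustrative sketch only: the endpoint, token header, and field names below
# are assumptions for demonstration, not the documented Rcs API.
import requests

API_BASE = "https://api.example.com/v2"   # hypothetical base URL
TOKEN = "your-api-token"                  # hypothetical credential
LB_ID = "lb-12345"                        # hypothetical load balancer ID

health_check = {
    "protocol": "http",              # http, https, or tcp
    "port": 80,                      # port to probe on each node
    "path": "/hello.php",            # only used for http/https checks
    "check_interval_seconds": 15,
    "response_timeout_seconds": 5,
    "unhealthy_threshold": 5,
    "healthy_threshold": 3,
}

resp = requests.put(
    f"{API_BASE}/load_balancers/{LB_ID}/health_check",
    json=health_check,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```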
Protocol
The protocol must be one of the following:
HTTP
HTTPS
TCP
Along with the protocol, you can set the port you wish to probe.
If you set the protocol to HTTP or HTTPS, the load balancer requires an HTTP 200 status code for success.
A TCP check succeeds if the port accepts a connection.
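The minimal sketch below shows what a single probe of each kind amounts to, assuming the node's address, port, and path are already known: an HTTP or HTTPS check that only treats a 200 status code as success, and a TCP check that only tests whether the port accepts a connection.

```python
# Minimal sketch of a single health check probe.
import socket
import requests

def http_probe(scheme, host, port, path, timeout=5.0):
    """HTTP/HTTPS check: only a 200 status code counts as success."""
    try:
        r = requests.get(f"{scheme}://{host}:{port}{path}", timeout=timeout)
        return r.status_code == 200
    except requests.RequestException:
        return False

def tcp_probe(host, port, timeout=5.0):
    """TCP check: success simply means the port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(http_probe("http", "192.0.2.10", 80, "/hello.php"))  # example node address
print(tcp_probe("192.0.2.10", 3306))                        # e.g. a database port
```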
Interval Timeout
The Interval Timeout is the number of seconds between checks. The default value is 15 and the maximum is 300. At this frequency, the health check probes each node using the configured protocol and port.
Response Timeout
The Response Timeout defines the number of seconds a check may take before it fails. The default value is 5 seconds and the maximum is 300. With the default setting, a request that takes 6 seconds therefore counts as a failure. The Interval Timeout determines when the next check runs.
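The sketch below illustrates how these two settings interact, assuming an HTTP check against a hypothetical node: a probe runs every Interval Timeout seconds, and any response slower than the Response Timeout counts as a failed check.

```python
# Sketch of how the Interval Timeout and Response Timeout interact.
import time
import requests

INTERVAL = 15           # seconds between checks (Interval Timeout)
RESPONSE_TIMEOUT = 5    # seconds before a check fails (Response Timeout)

def run_checks(url, rounds=3):
    for _ in range(rounds):
        try:
            r = requests.get(url, timeout=RESPONSE_TIMEOUT)
            ok = r.status_code == 200
        except requests.Timeout:
            ok = False   # e.g. a 6-second response exceeds the 5-second limit
        except requests.RequestException:
            ok = False
        print("check passed" if ok else "check failed")
        time.sleep(INTERVAL)   # wait until the next scheduled check

run_checks("http://192.0.2.10/hello.php")   # hypothetical node address
```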
Unhealthy Threshold
The Unhealthy Threshold defines how many consecutive checks must fail before a node is marked unhealthy. An unhealthy node no longer receives traffic. The failures must be consecutive: for example, with a threshold of 5, if three checks pass, one fails, and two more fail, only three consecutive failures have occurred, so the node is still marked active.
Healthy Threshold
The Healthy Threshold, like the Unhealthy Threshold, defines how many consecutive checks must pass before a node is marked healthy. Once the node reaches that streak, it returns to serving traffic as an active participant.
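The sketch below captures the consecutive-count logic behind both thresholds, assuming each check result arrives as a boolean; any break in a streak resets the count. It reproduces the earlier example: three passes followed by three failures against an Unhealthy Threshold of 5 leaves the node active.

```python
# Sketch of the consecutive-count logic behind both thresholds.
UNHEALTHY_THRESHOLD = 5
HEALTHY_THRESHOLD = 3

class NodeState:
    def __init__(self):
        self.healthy = True
        self.consecutive_failures = 0
        self.consecutive_successes = 0

    def record(self, check_passed):
        if check_passed:
            self.consecutive_successes += 1
            self.consecutive_failures = 0
            if not self.healthy and self.consecutive_successes >= HEALTHY_THRESHOLD:
                self.healthy = True   # node rejoins the pool
        else:
            self.consecutive_failures += 1
            self.consecutive_successes = 0
            if self.healthy and self.consecutive_failures >= UNHEALTHY_THRESHOLD:
                self.healthy = False  # node stops receiving traffic

# The example from the text: 3 passes, 1 failure, then 2 more failures.
# Only 3 consecutive failures have occurred, so the node stays active.
node = NodeState()
for result in [True, True, True, False, False, False]:
    node.record(result)
print(node.healthy)  # True
```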
HTTP Path
When the protocol is HTTP or HTTPS, the HTTP Path is the relative path to check. If your application runs at https://www.example.com/hello.php, enter /hello.php.
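As a small illustration, the relative path is combined with each node's address and the configured port to form the URL that gets probed; the helper name and node address below are made up for the example.

```python
# Small sketch: joining the relative HTTP Path with a node's address and port.
def build_check_url(protocol, node_address, port, path):
    # path should be relative, e.g. "/hello.php", not a full URL
    return f"{protocol}://{node_address}:{port}{path}"

print(build_check_url("https", "192.0.2.10", 443, "/hello.php"))
# -> https://192.0.2.10:443/hello.php
```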
Putting It All Together
Load balancers typically sit between the internet and your servers. They help ensure your application is up and running before sending traffic to it, and they can keep the same server responding to the same user for a seamless experience. When you want to update your application, a pool of servers behind a load balancer lets you update individual servers while the rest keep the core application online. Although primarily used to route web traffic, load balancers can also check open ports using the Transmission Control Protocol (TCP). A load balancer, while simple to add, has a number of tunable parameters, from the protocol to the thresholds to the time limits between checks and failures.
References
Rcs Load Balancer Feature Reference
Rcs Load Balancers
Load Balancing Like A Pro