How to use a Load Balancer with VKE

Introduction

Rcs Load Balancer is a fully-managed solution that distributes traffic across groups of servers, decoupling the availability of a backend service from the health of any single server. Rcs Load Balancer keeps your service online by spreading the load across multiple servers so that no single server gets overloaded.

If you are new to Rcs Load Balancers, you should read the Load Balancer Quickstart Guide first.

Rcs Kubernetes Engine (VKE) is a fully-managed Kubernetes product. When deploying an application to VKE, Kubernetes automatically spreads Pods across different nodes in a cluster for better availability.

Rcs Load Balancers integrate with VKE to distribute traffic across Pods running on different nodes. A Load Balancer deployed through VKE offers the same features and capabilities as the standalone, fully-managed product.

This guide explains how to deploy and configure Rcs Load Balancers in Rcs Kubernetes Engine (VKE).

Prerequisites

Before you begin, you should:

  • Deploy an Rcs Kubernetes Engine cluster with at least three nodes.
  • Configure kubectl and git on your machine.
  • Have a domain name if you want to follow the TLS/SSL certificate sections.

1. Deploy Web Servers

This section shows how to deploy web servers to the Kubernetes cluster using a Deployment. The example in this article is a Python web server that returns the Pod's hostname and the HTTP request headers.

This example application has a public Docker image (quanhua92/whoami) on Docker Hub. You can visit this GitHub repository to see the application's source code.

  1. Create a file named deployment.yaml with the following content:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
         name: whoami
     spec:
         replicas: 3
         selector:
             matchLabels:
                 name: whoami
         template:
             metadata:
                 labels:
                     name: whoami
             spec:
                 containers:
                     - name: whoami
                       image: quanhua92/whoami:latest
                       imagePullPolicy: Always
                       ports:
                           - containerPort: 8080
  2. Deploy the application using kubectl

     $ kubectl apply -f deployment.yaml

Notice that the deployment name in this example is whoami, and the Pod listens for requests on port 8080.
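
You can verify that all three replicas are running by listing the Pods that carry the Deployment's name=whoami label:

    $ kubectl get pods -l name=whoami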

2. Deploy a Load Balancer for HTTP traffic

This section shows how to deploy a load balancer for HTTP traffic on port 80. You deploy a Kubernetes Service with the LoadBalancer type and use metadata annotations to configure the VKE Load Balancer.

The default load-balancing algorithm is Round Robin, which sends requests to each server behind the load balancer in turn.

  1. Create a file named service.yaml with the following content. The selector name: whoami matches the Pod labels in the existing Deployment, and the target port 8080 matches the container port from the previous step.

     apiVersion: v1
     kind: Service
     metadata:
         name: whoami-lb
         annotations:
             service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
     spec:
         type: LoadBalancer
         selector:
             name: whoami
         ports:
             - name: http
               port: 80
               targetPort: 8080
  2. Deploy the Service using kubectl

     $ kubectl apply -f service.yaml
  3. Run the following command to see the VKE Load Balancer setup progress:

     $ kubectl get service whoami-lb -w

The result should look like:

NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
whoami-lb   LoadBalancer   10.108.167.185   <pending>     80:32365/TCP   9s
whoami-lb   LoadBalancer   10.108.167.185   139.180.143.107   80:32365/TCP   81s

You can also go to the Load Balancer page in the Customer Portal to inspect your Load Balancers.

You can navigate to the IP address of your Load Balancer to access the application.

Notice that it may take a few minutes before you can access the application through the Load Balancer IP address.

The response of the application should look like:

Hostname: whoami-84798c47cd-2gnhd
Host: 139.180.143.107
Cache-Control: max-age=0
Dnt: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,vi;q=0.8,la;q=0.7,nl;q=0.6
Cookie: session=eyJteV9zZXNzaW9uIjoiVDk4UVUifQ.YoqD3g.o1pyE6s6vTkQqnbvPhG08_6tvOI
X-Forwarded-Proto: http
X-Forwarded-For: 113.172.203.231
Connection: close
Session: T98QU

Refresh the website a few times. Notice that the Hostname changes every few requests, which shows that the Load Balancer distributes traffic across multiple Pods.
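
You can reproduce this check from the command line. For example, a short loop like the following (with <LOAD_BALANCER_IP> standing in for your Load Balancer's external IP) should print several different Pod hostnames:

    $ for i in $(seq 1 5); do curl -s http://<LOAD_BALANCER_IP> | grep Hostname; done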

3. Using the Least Connections Load Balancing Algorithm

The least connections algorithm is a dynamic load-balancing algorithm that sends each client request to the application server with the fewest active connections at the moment the load balancer receives the request. It works best in environments where the application servers have similar capabilities.

  1. Change the service.yaml as follows:

     apiVersion: v1
     kind: Service
     metadata:
         name: whoami-lb
         annotations:
             service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
             service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
     spec:
         type: LoadBalancer
         selector:
             name: whoami
         ports:
             - name: http
               port: 80
               targetPort: 8080
  2. Deploy the Service using kubectl

     $ kubectl apply -f service.yaml
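
To confirm that the algorithm annotation was applied, you can read it back from the Service with a JSONPath query (the dots in the annotation key are escaped); the output should be least_connections:

    $ kubectl get service whoami-lb -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/vultr-loadbalancer-algorithm}'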

4. Configure Health Check on the VKE Load Balancer

Rcs Load Balancers provide health checks to determine whether the application servers respond to client requests. Here are some configurations that you can customize:

  • healthcheck-protocol: The protocol that the load balancer uses to perform health checks. The two possible values are tcp and http. The default value is tcp.
  • healthcheck-path: The URL path that the load balancer requests to check the application server. The default value is the root path, /.
  • healthcheck-port: The port that the load balancer uses to check the application server. Kubernetes defines this value; you should not change it in normal scenarios.
  • healthcheck-check-interval: The interval between health checks, in seconds. The default value is 15.
  • healthcheck-response-timeout: The response timeout, in seconds. The default value is 5.
  • healthcheck-unhealthy-threshold: The number of failed health checks before the load balancer removes the application server from the server pool. The default value is 5.
  • healthcheck-healthy-threshold: The number of successful health checks before the load balancer adds the application server back to the server pool. The default value is 5.

The example application in this article has a dedicated health check endpoint, /health. The benefits of using /health instead of / are:

  • You can reduce the computation required to run the health check.
  • You can reduce the response time and content length.

In the example application code, the endpoint returns an empty response with a 200 status and performs no complex computation.
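
You can try the endpoint through the existing Load Balancer before configuring the health check. Replace <LOAD_BALANCER_IP> with your Load Balancer's IP address; the response should be an HTTP 200 with an empty body:

    $ curl -i http://<LOAD_BALANCER_IP>/health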

  1. Change the service.yaml as follows:

     apiVersion: v1
     kind: Service
     metadata:
         name: whoami-lb
         annotations:
             service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
             service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-protocol: "http"
             service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-path: "/health"
             service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-check-interval: "10"
             service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-response-timeout: "5"
             service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-unhealthy-threshold: "5"
             service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-healthy-threshold: "5"
     spec:
         type: LoadBalancer
         selector:
             name: whoami
         ports:
             - name: http
               port: 80
               targetPort: 8080
  2. Deploy the Service using kubectl

     $ kubectl apply -f service.yaml

5. Expose the Application With Free TLS/SSL Certificates from Let's Encrypt

This section shows how to deploy a load balancer for HTTPS traffic on port 443.

Here are some approaches to obtaining TLS/SSL Certificates:

  • Self-Signed Certificates: Use your own Certificate Authority to create and sign TLS/SSL certificates. This is a great option for development environments.
  • Purchase TLS/SSL Certificates: Buy a TLS/SSL certificate from a well-known Certificate Authority. This is the usual choice for production use-cases.
  • Use Free TLS/SSL Certificates: Use free TLS/SSL certificates from Let's Encrypt or ZeroSSL.

In this section, you install NGINX Ingress Controller to handle incoming SSL/TLS traffic and Cert Manager to manage free TLS/SSL certificates from Let's Encrypt.

NGINX Ingress Controller creates a LoadBalancer service to handle incoming traffic. This LoadBalancer service is also a VKE Load Balancer, so you don't need the service created in the previous sections.

The VKE Load Balancer routes incoming traffic to the pool of cluster nodes. Each node then forwards the traffic to an NGINX Ingress Controller, which routes the requests to the corresponding application Pods.

By default, there is only one NGINX Ingress Controller replica. You can scale it up or down depending on your system's traffic.

Cert Manager automates the creation and management of TLS/SSL certificates from various issuing sources, including Let's Encrypt, HashiCorp Vault, Venafi, and private public key infrastructure.

You need a domain name to issue and manage free TLS/SSL Let's Encrypt certificates.

5.1. Prepare the Application Service

  1. (Optional) Delete the Service from the previous section with the following command:

     $ kubectl delete -f service.yaml
  2. Create a Service file service-02.yaml with the following content. The selector name: whoami matches the existing Deployment, and the target port 8080 matches the container port from the previous step. Notice that this Service is not a LoadBalancer type, and its name is whoami-service.

     apiVersion: v1
     kind: Service
     metadata:
         name: whoami-service
     spec:
         selector:
             name: whoami
         ports:
             - name: http
               port: 80
               targetPort: 8080
  3. Run the command to create the service

     $ kubectl apply -f service-02.yaml
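
Because this Service omits the type field, it defaults to ClusterIP and is only reachable inside the cluster. You can confirm that it has no external IP with:

    $ kubectl get service whoami-service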

5.2. Install NGINX Ingress Controller

  1. Install NGINX Ingress Controller (ingress-nginx)

     $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
  2. Go to your Load Balancers dashboard and get the IP address of the newly created Load Balancer. This is the Load Balancer created for the NGINX Ingress Controller.

  3. (Optional) Run the following command to wait for the IP address of the newly created Load Balancer. The IP appears in the EXTERNAL-IP column.

     $ kubectl get services ingress-nginx-controller -n ingress-nginx -w
  4. Create an A record in your domain's DNS that points to the IP address above. (A quick way to verify DNS propagation is shown after this list.)

  5. (Optional) Scale the NGINX Ingress Controller to three replicas.

     $ kubectl scale deployment --namespace ingress-nginx ingress-nginx-controller --replicas=3 
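
Before requesting certificates in the next section, you can confirm that the A record from step 4 has propagated and resolves to the Load Balancer's IP address, for example with dig:

    $ dig +short <YOUR_DOMAIN>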

5.3. Install Cert Manager

This section shows how to set up Cert Manager with the HTTP01 challenge solver to verify domain ownership. If you want to use the DNS01 challenge solver instead, see the article How to Automate DNS/TLS with External DNS and Let's Encrypt on Rcs Kubernetes Engine.

  1. Install cert-manager to manage SSL certificates

     $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.10.1/cert-manager.yaml
  2. Create a manifest file letsencrypt.yaml to handle Let's Encrypt certificates. Replace <YOUR_EMAIL> with your actual email address.

     apiVersion: cert-manager.io/v1
     kind: ClusterIssuer
     metadata:
       name: letsencrypt-staging
     spec:
       acme:
         # The ACME server URL
         server: https://acme-staging-v02.api.letsencrypt.org/directory
         preferredChain: "ISRG Root X1"
         # Email address used for ACME registration
         email: <YOUR_EMAIL>
         # Name of a secret used to store the ACME account private key
         privateKeySecretRef:
           name: letsencrypt-staging
         solvers:
           - http01:
               ingress:
                 class: nginx
     ---
     apiVersion: cert-manager.io/v1
     kind: ClusterIssuer
     metadata:
       name: letsencrypt-prod
     spec:
       acme:
         # The ACME server URL
         server: https://acme-v02.api.letsencrypt.org/directory
         # Email address used for ACME registration
         email: <YOUR_EMAIL>
         # Name of a secret used to store the ACME account private key
         privateKeySecretRef:
           name: letsencrypt-prod
         solvers:
           - http01:
               ingress:
                 class: nginx
  3. Run the command to install the above Let's Encrypt issuers.

     $ kubectl apply -f letsencrypt.yaml
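
You can verify that both issuers registered with the ACME servers by listing them; the READY column should show True for each once registration completes:

    $ kubectl get clusterissuers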

5.4. Expose Application with Ingress

  1. Create an Ingress manifest file ingress.yaml with the following content. Replace <YOUR_DOMAIN> with the domain for which you created an A record in the previous step, and replace whoami-service with your Service name if it differs.

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: whoami-ingress
       annotations:
         kubernetes.io/ingress.class: nginx
         cert-manager.io/cluster-issuer: letsencrypt-prod
     spec:
       tls:
         - secretName: whoami-tls
           hosts:
             - <YOUR_DOMAIN>
       rules:
         - host: <YOUR_DOMAIN>
           http:
             paths:
               - path: /
                 pathType: Prefix
                 backend:
                   service:
                     name: whoami-service
                     port:
                       number: 80
  2. Run the command to create the ingress

     $ kubectl apply -f ingress.yaml
  3. Run the command kubectl get ingress to see the newly created ingress. The result should look like:

      NAME             CLASS    HOSTS           ADDRESS        PORTS     AGE
      whoami-ingress   <none>   <YOUR_DOMAIN>   140.82.41.69   80, 443   37s
  4. Check the certificates

     $ kubectl get certificates
  5. Navigate to https://<YOUR_DOMAIN> to access your application.
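
If the certificate does not become ready after a few minutes, describing the Certificate resource that Cert Manager created for the whoami-tls secret usually shows where issuance is stuck, such as a pending HTTP01 challenge:

    $ kubectl describe certificate whoami-tls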

6. Using Sticky Sessions

By default, a load balancer routes each request independently to a pool of servers based on the load-balancing algorithm. However, you can use the sticky session (also known as session affinity) feature to bind a user's session to a specific server.

In a Kubernetes environment, the sticky session feature keeps a client's session on a specific application pod. If that pod becomes unavailable, the load balancer re-routes the requests to another application pod.

This section shows how to achieve sticky sessions using the NGINX Ingress Controller from the previous section.

You need to add the following annotations to the Ingress manifest file:

  • nginx.ingress.kubernetes.io/affinity: enables the sticky session. The value must be cookie.
  • nginx.ingress.kubernetes.io/session-cookie-name: the name of the cookie used to bind each client to a specific application pod.
  • nginx.ingress.kubernetes.io/session-cookie-max-age: the time until the cookie expires, in seconds.
  • nginx.ingress.kubernetes.io/session-cookie-expires: a legacy version of the previous annotation for compatibility with older browsers.
  1. Change the ingress.yaml as follows:

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
         name: whoami-ingress
         annotations:
             kubernetes.io/ingress.class: nginx
             cert-manager.io/cluster-issuer: letsencrypt-prod
             nginx.ingress.kubernetes.io/affinity: "cookie"
             nginx.ingress.kubernetes.io/session-cookie-name: "sticky"
             nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
             nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
     spec:
         tls:
             - secretName: whoami-tls
               hosts:
                   - <YOUR_DOMAIN>
         rules:
             - host: <YOUR_DOMAIN>
               http:
                   paths:
                       - path: /
                         pathType: Prefix
                         backend:
                             service:
                                 name: whoami-service
                                 port:
                                     number: 80
  2. Apply the changes using kubectl

     $ kubectl apply -f ingress.yaml
  3. Confirm that the Ingress works

     $ kubectl describe ingress whoami-ingress
  4. Check whether the server responds with a Set-Cookie header

     $ curl -I https://<YOUR_DOMAIN>
  5. The result should look like:

     HTTP/2 200
     date: Tue, 24 May 2022 17:34:58 GMT
     content-type: text/html; charset=utf-8
     content-length: 372
     set-cookie: sticky=1653413699.542.140.602576|38fb12998d06bbfbeaeccec9bf71c761; Expires=Thu, 26-May-22 17:34:58 GMT; Max-Age=172800; Path=/; Secure; HttpOnly
     set-cookie: session=eyJteV9zZXNzaW9uIjoiVDBSQUsifQ.Yo0XQg.qaDgq6kq_P2gMC1vgqLPqN1KQfE; HttpOnly; Path=/
     vary: Cookie
     strict-transport-security: max-age=15724800; includeSubDomains

Notice that the response contains a set-cookie header with the sticky key. This cookie contains information about the upstream server. The NGINX Ingress Controller tries to route requests with the same cookie to the same application pod.

Refresh the website a few times. Notice that the Hostname doesn't change until the cookie expires, meaning the sticky session works as expected.
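
You can reproduce this check with curl by saving the cookies from the first request and replaying them on subsequent requests; the Hostname line should stay the same as long as the cookie is sent:

    $ curl -s -c cookies.txt https://<YOUR_DOMAIN> | grep Hostname
    $ curl -s -b cookies.txt https://<YOUR_DOMAIN> | grep Hostname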

7. Using Proxy Protocol

Proxy Protocol is a network protocol that preserves a client's connection information (such as the IP address) when the connection passes through a proxy. Preserving client information is essential for analyzing traffic logs or changing application behavior based on the client's geographic IP address.

This section shows how to set up an Rcs Load Balancer with Proxy Protocol to distribute traffic to the NGINX Ingress Controller while preserving client information.

Notice that you should set up Cert Manager with the DNS01 challenge solver to issue TLS/SSL certificates, which avoids problems with the HTTP01 challenge.

  1. Download the installation manifest of the NGINX Ingress Controller

     $ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
  2. Open the deploy.yaml with your favorite text editor

  3. Search for the text kind: ConfigMap and replace the content of the ConfigMap resource as follows. Replace the text 1.5.1 with your NGINX Ingress Controller version.

     apiVersion: v1
     kind: ConfigMap
     metadata:
       labels:
         app.kubernetes.io/component: controller
         app.kubernetes.io/instance: ingress-nginx
         app.kubernetes.io/name: ingress-nginx
         app.kubernetes.io/part-of: ingress-nginx
         app.kubernetes.io/version: 1.5.1
       name: ingress-nginx-controller
       namespace: ingress-nginx
     data:
       allow-snippet-annotations: "true"
       use-proxy-protocol: "true"
       use-forwarded-headers: "true"
       compute-full-forwarded-for: "true"
       ssl-redirect: "false"
  4. Search for the text type: LoadBalancer and replace the content of that Service as follows. Replace the text 1.5.1 with your NGINX Ingress Controller version.

     apiVersion: v1
     kind: Service
     metadata:
       labels:
         app.kubernetes.io/component: controller
         app.kubernetes.io/instance: ingress-nginx
         app.kubernetes.io/name: ingress-nginx
         app.kubernetes.io/part-of: ingress-nginx
         app.kubernetes.io/version: 1.5.1
       name: ingress-nginx-controller
       namespace: ingress-nginx
       annotations:
         service.beta.kubernetes.io/vultr-loadbalancer-proxy-protocol: 'true'
     spec:
       externalTrafficPolicy: Local
       ipFamilies:
       - IPv4
       ipFamilyPolicy: SingleStack
       ports:
       - appProtocol: http
         name: http
         port: 80
         protocol: TCP
         targetPort: 80
       - appProtocol: https
         name: https
         port: 443
         protocol: TCP
         targetPort: 443
       selector:
         app.kubernetes.io/component: controller
         app.kubernetes.io/instance: ingress-nginx
         app.kubernetes.io/name: ingress-nginx
       type: LoadBalancer
  5. Apply the changes using kubectl

     $ kubectl apply -f deploy.yaml
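
After the new configuration rolls out, you can confirm that client information is preserved by requesting the application and checking the forwarded headers; X-Forwarded-For should now show your real public IP address instead of a node or Load Balancer address:

    $ curl -s https://<YOUR_DOMAIN> | grep X-Forwarded-For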

More Information

  • Load Balancer Quickstart Guide
  • VKE Load Balancers
  • NGINX Ingress Examples
