Mastering Kubernetes Ingress with Examples on a VKE cluster

Introduction

Ingress is a Kubernetes API object that defines and manages traffic routing rules for services inside a cluster. With Ingress, traffic management is handled through a central set of routing rules, which removes the need to expose each service individually or create multiple load balancers.

While Ingress and load balancers overlap in functionality, they have a few distinct differences. A load balancer exposes services to the internet, which an Ingress can also do, but an Ingress object additionally includes a collection of rules. Ingress uses these rules to forward incoming traffic to a matching service, which in turn sends the request to a pod that can handle it. A Kubernetes Ingress object can offer multiple features such as load balancing, name-based virtual hosting, and TLS/SSL termination.

Kubernetes Ingress consists of the Ingress API object, which specifies the desired state for exposing services to inbound traffic, and the Ingress Controller. An Ingress Controller reads and processes the Ingress resource information and usually runs as a pod within the cluster.

This guide explains how to implement Kubernetes Ingress rules that match traffic patterns within a cluster. You apply a Kubernetes Ingress Controller to act as a gateway to cluster services and securely handle client requests.

Prerequisites

Before you begin:

  • Deploy a Rcs Kubernetes Engine (VKE) cluster
  • Deploy a Ubuntu server to work as a management machine to test the Ingress operations
  • Using SSH, access the server as a non-root user with sudo privileges

On the server:

  • Install and configure Kubectl to access the cluster
  • Install the Helm Package Manager

Kubernetes Ingress Controller Choices

To install a Kubernetes Ingress Controller on a VKE Cluster, you need to choose an implementation based on your cluster services. Popular choices such as the Nginx Ingress Controller, Istio, Traefik, and HAProxy Ingress let you implement Ingress rules that route requests to the respective pods behind your services.

The Nginx Ingress controller is feature-rich, highly customizable, and supports a wide range of Ingress resource annotations, including SSL termination and path-based routing. The Istio Gateway Ingress controller commonly integrates with the Istio service mesh for advanced traffic management, security, and telemetry with good observability features. Likewise, HAProxy Ingress uses HAProxy as the underlying load balancer; its high performance and low latency make it suitable for demanding workloads, with support for layer 7 routing using custom configurations.

In this guide, you implement the Nginx Ingress Controller on your VKE cluster as it offers flexibility and a rich feature set for managing your Ingress resources.

Ingress Resources

Ingress resources have no functionality on their own; the Ingress controller handles the actual implementation of the rules specified by the resource. An Ingress resource is a native Kubernetes resource in which DNS routing rules map external traffic to internal Kubernetes service endpoints.

Ingress resources update the routing configurations within the Ingress controller, and below is an example of a resource file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
    name: basic-ingress
    annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
spec:
    ingressClassName: nginx-demo
    rules:
        - host: demo.example.com
          http:
              paths:
                  - path: /demo-path
                    pathType: Prefix
                    backend:
                        service:
                            name: demo
                            port:
                                number: 8080

Below is what the Ingress resource declarations represent:

  • The apiVersion, kind, metadata, and spec fields are mandatory fields required by the Ingress resource. spec contains the information needed to configure a proxy server or load balancer, including the list of rules to match against each incoming request. The Ingress resource only supports rules for directing HTTP traffic, and each rule includes the following information:

  • host: Defines the hostname to apply the set rules. When a host such as demo.example.com is defined, the rules apply to that host. If no host is defined, the rules apply to all inbound HTTP traffic from any cluster IP address or domain

  • paths: Sets the incoming request endpoint associated with the backend. For example, /demo-path sets the URL request to demo.example.com/demo-path. The backend is defined with a service.name, service.port.number, and service.port.name. The host and path must match the contents of the incoming request before the Ingress controller directs traffic to the referenced service

  • pathType: Specifies how a path should be matched and processed. It defines how the resource paths are interpreted and used when routing requests. Below are the supported types:

    • Prefix: Compares the path prefix split by /. For example, in the above resource, the path /demo-path matches any derived path such as /demo-path/test1 or /demo-path/test1/test2 when the pathType is set to Prefix
    • Exact: Matches the exact URL path. For example, the path /demo-path would only work for that path; requests to other derived paths like /demo-path/ or /demo-path/test would not match
    • ImplementationSpecific: Delegates the matching decision to the Ingress Controller, which determines how to handle requests to the path.
  • backend: Defines the backend Kubernetes Service and port name. Requests that match the host and path rules are sent to the defined backend resource for processing

Requests that do not match any of the resource rules are handled by the Ingress controller depending on the controller specifications.
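As a rough local illustration of the matching modes (a sketch only, not controller code), the Prefix and Exact semantics for the /demo-path rule above can be modeled in shell:

```shell
# Sketch: emulate Ingress pathType semantics for the rule path /demo-path.
# Illustration only; the Ingress controller performs this matching itself.

prefix_match() {
    # Prefix: matches /demo-path itself and any sub-path split on /
    case "$1" in
        /demo-path|/demo-path/*) echo yes ;;
        *) echo no ;;
    esac
}

exact_match() {
    # Exact: matches only the literal path, not /demo-path/ or sub-paths
    if [ "$1" = "/demo-path" ]; then echo yes; else echo no; fi
}

prefix_match /demo-path/test1/test2   # yes
exact_match /demo-path/test1/test2    # no
```

Note that Prefix matching works on whole path elements, so a request to /demo-pathology does not match the /demo-path rule.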

Install a Kubernetes Ingress Controller to a VKE Cluster

You can install a Kubernetes Ingress controller of your choice depending on your cluster services. In this section, install the Nginx Ingress Controller to work as a gateway to your VKE cluster applications as described below.

  1. Create a new Nginx Ingress Controller namespace

     $ kubectl create namespace ingress-nginx
  2. Using Helm, install the Nginx Ingress Controller to your cluster in a new ingress-nginx namespace

     $ helm upgrade --install ingress-nginx ingress-nginx \
       --repo https://kubernetes.github.io/ingress-nginx \
       --namespace ingress-nginx
  3. Wait for at least 3 minutes for the Ingress controller to deploy on your cluster. Then, verify that the Nginx Ingress Controller Pod is running

     $ kubectl get pods -n ingress-nginx

    Output:

     NAME                                               READY    STATUS    RESTARTS   AGE
     ingress-nginx-controller-5fcb5746fc-95smj          1/1      Running   0          10m
  4. View the external IP of your Ingress controller and point a domain record to the address

     $ kubectl get services -n ingress-nginx 

    Output:

     NAME                                   TYPE              CLUSTER-IP        EXTERNAL-IP       PORT(S)                        AGE
     ingress-nginx-controller               LoadBalancer      10.109.53.1       192.0.2.100       80:32300/TCP,443:31539/TCP     132m
     ingress-nginx-controller-admission     ClusterIP         10.105.248.187    <none>            443/TCP                        132m

    As displayed in the above output, point your domain record to the load balancer external IP 192.0.2.100

Deploy a Sample Application

To test your installed Kubernetes Nginx Ingress Controller, deploy a sample application to your cluster as described below.

  1. Create a new namespace dev to separate the cluster services

     $ kubectl create namespace dev
  2. Using a text editor such as Nano, create a new hello-app.yaml deployment manifest

     $ nano hello-app.yaml
  3. Add the following contents to the file

     apiVersion: apps/v1
     kind: Deployment
     metadata:
         name: hello-app
         namespace: dev
     spec:
         selector:
             matchLabels:
                 app: hello
         replicas: 3
         template:
             metadata:
                 labels:
                     app: hello
             spec:
                 containers:
                  - name: hello
                    image: "gcr.io/google-samples/hello-app:2.0"

    Save and close the file

  4. Apply the new deployment to your cluster

     $ kubectl create -f hello-app.yaml
  5. View the deployment status and verify that it's available and ready

     $ kubectl get deployments -n dev

    Output:

     NAME         READY   UP-TO-DATE   AVAILABLE   AGE
     hello-app    3/3     3            3           70s
  6. Create a new service hello-app-svc.yaml file

     $ nano hello-app-svc.yaml
  7. Add the following contents to the file

     apiVersion: v1
     kind: Service
     metadata:
         name: hello-svc
         namespace: dev
         labels:
             app: hello
     spec:
         type: ClusterIP
         selector:
             app: hello
         ports:
          - port: 80
            targetPort: 8080
            protocol: TCP

    Save and close the file

  8. Create the service

     $ kubectl create -f hello-app-svc.yaml
  9. Verify that the service is available in the dev namespace

     $ kubectl get services -n dev

    Output:

     NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
     hello-svc    ClusterIP   10.100.32.138   <none>        80/TCP    66s

Define an Ingress Resource

  1. Create a new ingress resource file ingress-demo.yaml

     $ nano ingress-demo.yaml
  2. Add the following contents to the file. Replace example.com with your actual domain

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
         name: ingress-demo
         namespace: dev
         annotations:
             nginx.ingress.kubernetes.io/rewrite-target: /
     spec:
         ingressClassName: nginx
         rules:
              - host: example.com
                http:
                    paths:
                        - path: /hello
                          pathType: Prefix
                          backend:
                              service:
                                  name: hello-svc
                                  port:
                                      number: 80

    Save and close the file

    The above YAML file defines a new ingress-demo Ingress resource in the dev namespace. All matching requests from the example.com/hello URL are forwarded to the hello-svc cluster resource

  3. Deploy the Ingress resource

     $ kubectl create -f ingress-demo.yaml 
  4. Verify that the Ingress resource is available in your dev cluster namespace

     $ kubectl get ingress -n dev 
  5. Using Curl, send a test request to your configured Ingress URL and verify that the linked service accepts your request

     $ curl http://example.com/hello

    Output:

     Hello, world!
     Version: 2.0.0
     Hostname: hello-app-5fb487d974-4rgsr

    As returned by the URL request, the Ingress controller is correctly routing external traffic to the configured service resource

Connect Multiple Services to a Single Ingress Resource

Depending on your cluster resources, you can connect multiple services to a single ingress resource file. This allows you to configure a single domain with multiple path definitions. In this section, create new cluster services and define multiple paths to a single domain in an ingress resource as described in the steps below.

  1. Create a new foo.yaml service definition file

     $ nano foo.yaml
  2. Add the following contents to the file

     kind: Pod
     apiVersion: v1
     metadata:
       name: foo-app
       labels:
         app: foo
     spec:
       containers:
         - name: foo-app
           image: 'kicbase/echo-server:1.0'
     ---
     kind: Service
     apiVersion: v1
     metadata:
       name: foo-service
     spec:
       selector:
         app: foo
       ports:
         - port: 8080

    Save and close the file.

     The above resource definition file creates an echo-server pod and service that runs on port 8080, just like the sample application you deployed earlier.

  3. Apply the resource configuration to your cluster

     $ kubectl create -f foo.yaml 
  4. Create another deployment file bar.yaml

     $ nano bar.yaml
  5. Add the following contents to the file

     kind: Pod
     apiVersion: v1
     metadata:
       name: bar-app
       labels:
         app: bar
     spec:
       containers:
          - name: bar-app
            image: 'kicbase/echo-server:1.0'
     ---
     kind: Service
     apiVersion: v1
     metadata:
       name: bar-service
     spec:
       selector:
         app: bar
       ports:
         - port: 8080

    Save and close the file.

  6. Apply the resource to your cluster

     $ kubectl create -f bar.yaml 
  7. Verify that the foo-service and bar-service cluster services are available

     $ kubectl get services

    Output:

     NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
     bar-service   ClusterIP   10.109.10.196   <none>        8080/TCP   10h
     foo-service   ClusterIP   10.108.38.206   <none>        8080/TCP   10h
     kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    11h
  8. Verify that the respective service pods are available and running

     $ kubectl get pods

    Output:

     NAME      READY   STATUS    RESTARTS   AGE
     bar-app   1/1     Running   0          10h
     foo-app   1/1     Running   0          10h

    As displayed in the cluster resources output, you have deployed two additional pods and services to the default namespace which is different from the dev namespace you applied earlier.

  9. To expose the Kubernetes services through a single host definition, create a new Ingress resource foo-bar-ingress.yaml file

     $ nano foo-bar-ingress.yaml
  10. Add the following contents to the file

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: ingress-foo-bar-demo
       annotations:
         nginx.ingress.kubernetes.io/rewrite-target: /
     spec:
       ingressClassName: nginx
       rules:
         - host: example.com
           http:
             paths:
             - pathType: Prefix
               path: /foo
               backend:
                 service:
                   name: foo-service
                   port:
                     number: 8080
             - pathType: Prefix
               path: /bar
               backend:
                 service:
                   name: bar-service
                   port:
                     number: 8080

    Save and close the file.

    The above ingress resource defines rules that direct incoming traffic from the example.com host URL to the foo-service and bar-service cluster resources. The controller handles requests to each of the services based on the defined path

  11. Deploy the Ingress resource

     $ kubectl create -f foo-bar-ingress.yaml 
  12. Verify that the Ingress resource is available in your cluster

     $ kubectl get ingress

    Output:

     NAME                   CLASS   HOSTS                 ADDRESS         PORTS   AGE
     ingress-foo-bar-demo   nginx   example.com           192.0.2.100      80     10h   
  13. Using Curl, test access to the /foo path

     $ curl http://example.com/foo

    Output:

     Request served by foo-app
    
     HTTP/1.1 GET /foo
    
     Host: example.com
     Accept: */*
     ...

    As displayed in the above output, requests to the /foo path are successfully handled by the foo-service

  14. Test access to the /bar path

     $ curl http://example.com/bar

    Output:

     Request served by bar-app
    
     HTTP/1.1 GET /bar
    
     Host: example.com
     Accept: */*
     ...

     As displayed in the above output, requests to the /bar path are successfully handled by the bar-service

You have implemented an Ingress resource that directs traffic from one host to multiple paths with different services. You can configure multiple resource definitions with different paths to forward external requests to cluster services.
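The two rules above amount to a small routing table. A minimal sketch of the mapping the resource expresses (illustration only, assuming the host already matched example.com; the controller compiles such rules into NGINX configuration):

```shell
# Sketch: the path-to-backend mapping expressed by ingress-foo-bar-demo.
# Illustration only; unmatched requests fall through to the controller's
# default backend.

route() {
    case "$1" in
        /foo|/foo/*) echo "foo-service:8080" ;;
        /bar|/bar/*) echo "bar-service:8080" ;;
        *)           echo "default-backend" ;;
    esac
}

route /foo      # foo-service:8080
route /bar/x    # bar-service:8080
```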

Connect Your Services Across Different Namespaces

In production environments, it's recommended to separate cluster services into different namespaces. This isolates services, and an Ingress resource can normally only reference backend services in its own namespace. As a result, you usually have to create separate Ingress resources per namespace to reach the services. However, it's also possible to create a single Ingress resource that spans different cluster namespaces and routes traffic to the respective services.

Using the Kubernetes ExternalName Service type, you can bridge services from different namespaces within a single Ingress resource. For example, configure an Ingress resource that uses services from the dev namespace while referencing the deployed foo-service and bar-service in the default namespace as described below.

To create this bridge to the services in the default namespace from your dev namespace:

  1. Create a new foo-bar-hello-bridge.yaml service definition file

     $ nano foo-bar-hello-bridge.yaml
  2. Add the following contents to the file

     apiVersion: v1
     kind: Service
     metadata:
         name: hello-foo-bridge
         namespace: dev
     spec:
         type: ExternalName
         externalName: foo-service.default
     ---
     apiVersion: v1
     kind: Service
     metadata:
         name: hello-bar-bridge
         namespace: dev
     spec:
         type: ExternalName
         externalName: bar-service.default

    Save and close the file.

    In the above configuration, the .spec.externalName definition creates a bridge to the linked service. bar-service.default points to the bar-service resource, and foo-service.default points to the foo-service resource from the default namespace.

    The .metadata.namespace definition deploys the default namespace bridge service to the dev namespace

  3. Deploy the bridge service to your cluster

     $ kubectl create -f foo-bar-hello-bridge.yaml 
  4. Verify that the bridge services are available in the dev namespace

     $ kubectl get services -n dev 

    Output:

     NAME               TYPE           CLUSTER-IP      EXTERNAL-IP            PORT(S)   AGE
      hello-bar-bridge   ExternalName   <none>          bar-service.default    <none>    10h
      hello-foo-bridge   ExternalName   <none>          foo-service.default    <none>    10h
     hello-svc          ClusterIP      10.102.14.114   <none>                 80/TCP    11h

    As displayed in the above output, you can use hello-foo-bridge and hello-bar-bridge as backends for Ingress resources deployed in the dev namespace. The services point to the foo-service and bar-service resources respectively in the default namespace.

  5. To combine services in the default namespace with the dev namespace, create a new Ingress resource file single-ingress.yaml

     $ nano single-ingress.yaml
  6. Add the following contents to the file

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
          name: single-ingress
          namespace: dev
          annotations:
              nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
          ingressClassName: nginx
          rules:
              - host: example.com
                http:
                    paths:
                        - path: /hello
                          pathType: Prefix
                          backend:
                              service:
                                  name: hello-svc
                                  port:
                                      number: 80
                        - path: /dev-foo
                          pathType: Prefix
                          backend:
                              service:
                                  name: hello-foo-bridge
                                  port:
                                      number: 8080
                        - path: /dev-bar
                          pathType: Prefix
                          backend:
                              service:
                                  name: hello-bar-bridge
                                  port:
                                      number: 8080

    Save and close the file.

  7. To avoid conflicting rules on the /hello path, delete the previous ingress-demo resource in the dev namespace

     $ kubectl delete ingress ingress-demo -n dev 
  8. Deploy the new Ingress resource to your cluster

     $ kubectl create -f single-ingress.yaml 
  9. Using Curl, test access to the hello-svc in the dev namespace using the /hello path

     $ curl http://example.com/hello

    Output:

     Hello, world!
     Version: 2.0.0
     ...
  10. Test access to the foo service in the default namespace

     $ curl http://example.com/dev-foo

    Output:

     Request served by foo-app
    
     HTTP/1.1 GET /
    
     Host: example.com
     Accept: */*
     ...
  11. Test access to the bar service in the default namespace

     $ curl http://example.com/dev-bar

    Output:

     Request served by bar-app
    
     HTTP/1.1 GET /
    
     Host: example.com
     Accept: */*
     ...

    As displayed in the above outputs, the Ingress resource is correctly bridging services between the dev and default namespaces using the bridge services you created earlier.
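The bridge works because cluster DNS expands the short ExternalName into a fully qualified service name. A sketch of that expansion, assuming the default cluster.local cluster domain:

```shell
# Sketch: how an ExternalName such as foo-service.default resolves inside
# the cluster. Assumes the default cluster domain cluster.local.

cluster_domain="cluster.local"

service_fqdn() {
    # <service>.<namespace> expands to <service>.<namespace>.svc.<cluster-domain>
    echo "$1.svc.$cluster_domain"
}

service_fqdn foo-service.default   # foo-service.default.svc.cluster.local
service_fqdn bar-service.default   # bar-service.default.svc.cluster.local
```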

Apply TLS/SSL Termination to Accept HTTPS Connections

TLS/SSL termination refers to the process of decrypting encrypted network traffic secured using the TLS/SSL protocols. This happens at a designated point within your network infrastructure such as a load balancer, server, or reverse proxy such as the Ingress controller. To apply TLS/SSL termination to your Ingress resources, follow the steps below to generate self-signed SSL certificates to apply to your cluster.

  1. Using the Openssl utility, generate a new self-signed certificate. Replace example.com with your actual domain name and desired certificate details

     $ openssl req -x509 \
         -sha256 -days 365 \
         -nodes \
         -newkey rsa:2048 \
         -subj "/CN=example.com/C=US/L=San Francisco" \
         -keyout root.key -out root.crt

    The above command generates a new certificate file root.crt, and a corresponding private key file root.key in your working directory.

  2. Create a new Kubernetes secret to upload your self-signed SSL certificate credentials to the cluster

     $ kubectl create secret tls hello-app-tls \
             --namespace dev \
             --key root.key \
             --cert root.crt
  3. Create a new ingress-tls.yaml Ingress resource file

     $ nano ingress-tls.yaml
  4. Add the following contents to the file. Verify that the .spec.tls hosts and secretName fields match your domain and Kubernetes Secret details

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: ingress-tls-demo
       namespace: dev
       annotations:
         nginx.ingress.kubernetes.io/rewrite-target: /
     spec:
       ingressClassName: nginx
       tls:
         - hosts:
             - example.com
           secretName: hello-app-tls
       rules:
         - host: example.com
           http:
             paths:
               - path: /hello-tls
                 pathType: Prefix
                 backend:
                   service:
                     name: hello-svc
                     port:
                       number: 80

    Save and close the file.

     The above Ingress resource sets a new path /hello-tls with TLS termination that points to the hello-svc service

  5. Apply the new Ingress resource to your cluster

     $ kubectl apply -f ingress-tls.yaml
  6. Send an HTTPS request to the /hello-tls path on your configured host domain and verify that the cluster service responds to your request

     $ curl -kv https://example.com/hello-tls

    Output:

     *   Trying controller_ip:443...
     * Connected to controller_ip (controller_ip) port 443
     * ALPN: curl offers h2,http/1.1
     * TLSv1.3 (OUT), TLS handshake, Client hello (1):
     ...
     * TLSv1.3 (OUT), TLS handshake, Finished (20):
     * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
     * ALPN: server accepted h2
     * Server certificate:
     *  subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
     *  start date: Oct  4 16:25:10 2023 GMT
     *  expire date: Oct  3 16:25:10 2024 GMT
     *  issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
     *  SSL certificate verify result: self-signed certificate (18), continuing anyway.
     * using HTTP/2
     * [HTTP/2] [1] OPENED stream for https://example.com/hello-tls
     * [HTTP/2] [1] [:method: GET]
     ...
     ...
     Hello, world!
     Version: 2.0.0
     Hostname: hello-app-5fb487d974-4rgsr
     * Connection #0 to host example.com left intact

    From the output, you can see the TLS handshake process from start to finish, with the hello-svc successfully responding to the request. This shows that TLS termination is successful.
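Before uploading a certificate as a cluster secret, you can also inspect it locally. The sketch below regenerates a throwaway self-signed certificate under the hypothetical filenames demo.key/demo.crt (so your root.key/root.crt are left untouched) and prints its subject and expiry:

```shell
# Sketch: generate and inspect a throwaway self-signed certificate.
# demo.key/demo.crt are illustrative names, used so the block is
# self-contained and does not overwrite root.key/root.crt.

openssl req -x509 -sha256 -days 365 -nodes -newkey rsa:2048 \
    -subj "/CN=example.com" \
    -keyout demo.key -out demo.crt 2>/dev/null

openssl x509 -in demo.crt -noout -subject    # the CN the TLS rule serves
openssl x509 -in demo.crt -noout -enddate    # certificate expiry date
```

Checking the subject CN against the hosts entry in your Ingress .spec.tls block helps catch mismatched certificates before they reach the cluster.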

To apply trusted Let's Encrypt SSL certificates to your cluster, visit how to Set up Nginx Ingress Controller with SSL on VKE

Troubleshooting Ingress Controller Errors

When configuring your Ingress controller, you may encounter errors that you need to investigate and resolve to get the controller running correctly. Below are ways to find and troubleshoot common Ingress controller errors.

View Ingress Controller Logs

  1. Verify the Ingress controller pod status

     $ kubectl get pods -n ingress-nginx 

    Output:

     NAME                                        READY   STATUS    RESTARTS   AGE
     ingress-nginx-controller-c5c658699-65fk4   1/1     Running   0          41m
  2. View the Ingress controller pod logs to identify the potential source of an error

     $ kubectl logs ingress-nginx-controller-<xxxxx>-<xxxxxx> -n ingress-nginx 

    Examine the log entries and find the source of application errors that affect your controller performance

Verify DNS Records

When using a host such as example.com, verify that the domain resolves to your Ingress controller Load Balancer IP Address. Improper DNS resolution can lead to timeout errors when trying to access defined paths. Verify your domain address record using nslookup as described below.

    $ nslookup www.example.com
  
Your output should look like the one below:

    Server: ...
    ...
    Non-authoritative answer:
    ...
    Address: 192.0.2.100
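A quick way to confirm the record is to compare the resolved address against the load balancer IP. A sketch with both values hard-coded for illustration; in practice, take the load balancer IP from `kubectl get services -n ingress-nginx` and the resolved address from the nslookup output:

```shell
# Sketch: compare the DNS answer with the Ingress load balancer IP.
# Both values are hard-coded here for illustration only.

check_dns() {
    # $1: address resolved via DNS, $2: load balancer external IP
    if [ "$1" = "$2" ]; then
        echo "DNS OK"
    else
        echo "DNS mismatch: point the A record at $2"
    fi
}

check_dns "192.0.2.100" "192.0.2.100"   # DNS OK
```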

Error 404

  1. View your Ingress resources and note the name of the target resource

     $ kubectl get ingress -n dev

  2. Describe the target Ingress resource and verify the configurations in the Rules: section

     $ kubectl describe ingress single-ingress -n dev

    Output:

     Name:             single-ingress
     Labels:           <none>
     Namespace:        dev
     Address:          192.0.2.100
     Ingress Class:    nginx
     Default backend:  <default>
     Rules:
       Host                 Path  Backends
       ----                 ----  --------
       example.com 
                           /hello     hello-svc:80 (10.244.161.195:8080,10.244.54.196:8080,10.244.91.198:8080)
                           /dev-foo   hello-foo-bridge:8080 (<error: endpoints "hello-foo-bridge" not found>)
                           /dev-bar   hello-bar-bridge:8080 (<error: endpoints "hello-bar-bridge" not found>)
     Annotations:           nginx.ingress.kubernetes.io/rewrite-target: /
     Events:                <none>

     In the above output, the Ingress resource includes the /hello, /dev-foo, and /dev-bar paths that point to the backend services. The endpoints not found entries shown for the bridge services are expected for ExternalName services, which have no endpoints of their own. To fix a 404 error, verify that your target paths point to the correct service and port. If not, edit your Ingress resource manifest to correct the paths and service definitions.

Conclusion

In this guide, you implemented Kubernetes Ingress operations to correctly route and handle external requests in a VKE cluster. Depending on your cluster services, you can deploy an Ingress Controller to route incoming traffic to multiple services across all cluster namespaces. For more information about the Nginx Ingress Controller applied in this guide, visit the official documentation.

Next Steps

To implement more solutions in your Rcs Kubernetes Engine (VKE) cluster, visit the following resources:

Introduction Ingress is a Kubernetes API object that specifies and manages traffic routing rules for services inside a cluster. By using Ingress in a cluster, the process of traffic management goes through routing rules, and this removes the need to expose each service or create multiple load balancers. While Ingress and load balancers are similar in functionality, they have a few distinct differences. Load balancers expose services to the internet, which is also an Ingress feature. But an Ingress Object includes a collection of rules. Ingress uses these rules and forwards all incoming traffic to a matching service which in return sends the request to a pod that can handle the request. A Kubernetes ingress object can offer multiple services such as load balancing, name-based virtual hosting, and TLS/SSL termination. Kubernetes Ingress consists of the Ingress API object that specifies the applicable state for exposing services to inbound/incoming traffic, and the Ingress Controller. An Ingress Controller reads and processes the Ingress resource information and usually runs as a pod within the cluster. This guide explains how you can implement Kubernetes Ingress processes to create rules that match traffic patterns within a cluster. You are to apply a Kubernetes Ingress Controller to act as a gateway to cluster services and securely handle client requests. Prerequisites Before you begin: Deploy a Rcs Kubernetes Engine (VKE) cluster Deploy a Ubuntu server to work as a management machine to test the Ingress operations Using SSH, access the server as a non-root user with sudo privileges On the server: Install and configure Kubectl to access the cluster Install the Helm Package Manager Kubernetes Ingress Controller Choices To install a Kubernetes Ingress Controller on a VKE Cluster, you need to choose an implementation method based on your cluster services. 
Among the best choices, the Nginx Ingress Controller, Istio, Traefik, and HAProxy Ingress allow you to implement Ingress rules to control your services that route requests to the respective pods. Nginx Ingress controller is feature-rich, highly customizable, and supports a wide range of Ingress resource annotations including SSL termination and path-based routing. The Istio Gateway Ingress controller commonly integrates with the Istio service mesh for advanced traffic management, security, and telemetry with good observability features. Likewise HAProxy Ingress is a controller that uses HAProxy as the underlying load balancer. It offers high performance and low latency features that make it suitable for demanding workloads with support for layer 7 routing using custom configurations. In this guide, implement the Nginx Ingress Controller on your VKE cluster as it offers more flexibility and features to manage your Ingress resources. Ingress Resources Ingress resources on their own have no functionality outside of the Ingress controller, the controller handles the actual implementation of the rules specified by the resource. An Ingress resource is a native Kubernetes resource in which DNS routing rules map external traffic to internal Kubernetes service endpoints. Ingress resources update the routing configurations within the Ingress controller, and below is an example of a resource file: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: basic-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: ingressClassName: nginx-demo rules: - host: demo.example.com http: paths: - path: /demo-path pathType: Prefix backend: service: name: demo port: number: 8080 Below is what the Ingress resource declarations represent: The apiVersion, kind, metadata, and spec fields are mandatory fields required by the Ingress resource. 
spec contains the information needed to configure a proxy server or load balancer, it contains the list of rules to match against each incoming request. The Ingress resource only supports rules for directing HTTP traffic, rule must include the following information: host: Defines the hostname to apply the set rules. When a host such as demo.example.com is defined, the rules apply to that host. If no host is defined, the rules apply to all inbound HTTP traffic from any cluster IP address or domain paths: Sets the incoming request endpoint associated with the backend. For example, /demo-path sets the URL request to demo.example.com/demo-path. The backend is defined with a service.name, service.port.number, and service.port.name. The host and path must match the contents of the incoming request before the Ingress controller directs traffic to the referenced service pathType: Specify how a path should be matched and processed. It defines how the resource paths are interpreted and used when routing requests. Below are the supported types: Prefix: Compares the path prefix split by /. For example, in the above resource, - the path /demo-path matches any derived path such as /demo-path/test1, /demo-path/test1/test2 when the pathtype is set to prefix Exact: Matches the exact URL path. For example, the path /demo-path would only work for that path, any requests to other derived paths like /demo-path/ or /demo-path/test would not return any result ImplementationSpecific: Grant the Ingress Controller determination privileges on how to manage requests to the path. backend: Defines the backend Kubernetes Service and port name. Requests that match the host and path rules are sent to the defined backend resource for processing Requests that do not match any of the resource rules are handled by the Ingress controller depending on the controller specifications. 
Install a Kubernetes Ingress Controller to a VKE Cluster

You can install a Kubernetes Ingress controller of your choice depending on your cluster services. In this section, install the Nginx Ingress Controller to work as a gateway to your VKE cluster applications as described below.

Create a new Nginx Ingress Controller namespace.

$ kubectl create namespace ingress-nginx

Using Helm, install the Nginx Ingress Controller to your cluster in the new ingress-nginx namespace.

$ helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx

Wait for at least 3 minutes for the Ingress controller to deploy on your cluster. Then, verify that the Nginx Ingress Controller pod is running.

$ kubectl get pods -n ingress-nginx

Output:

NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-5fcb5746fc-95smj   1/1     Running   0          10m

View the external IP of your Ingress controller and point a domain record to the address.

$ kubectl get services -n ingress-nginx

Output:

NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.109.53.1      192.0.2.100   80:32300/TCP,443:31539/TCP   132m
ingress-nginx-controller-admission   ClusterIP      10.105.248.187   <none>        443/TCP                      132m

As displayed in the above output, point your domain record to the load balancer external IP 192.0.2.100.

Deploy a Sample Application

To test your installed Kubernetes Nginx Ingress Controller, deploy a sample application to your cluster as described below.
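If you script cluster checks, the load balancer external IP from the installation step can be pulled out of the kubectl table output with a small helper. The following Python sketch is a hypothetical example that parses the sample output shown above (the function name and sample text are for illustration only):

```python
def external_ip(kubectl_output: str, service: str) -> str:
    """Extract the EXTERNAL-IP column for a service from `kubectl get services` table output."""
    lines = kubectl_output.strip().splitlines()
    header = lines[0].split()
    ip_index = header.index("EXTERNAL-IP")
    for line in lines[1:]:
        fields = line.split()
        if fields[0] == service:
            return fields[ip_index]
    raise LookupError(f"service {service!r} not found")

# Sample output as shown in this guide; kubectl prints <none>
# for services without an external IP.
output = """\
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.109.53.1      192.0.2.100   80:32300/TCP,443:31539/TCP   132m
ingress-nginx-controller-admission   ClusterIP      10.105.248.187   <none>        443/TCP                      132m
"""
print(external_ip(output, "ingress-nginx-controller"))  # 192.0.2.100
```

In practice, kubectl's structured output is more robust for automation, for example something like `kubectl get service ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`.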
Create a new namespace dev to separate the cluster services.

$ kubectl create namespace dev

Using a text editor such as Nano, create a new hello-app.yaml deployment manifest.

$ nano hello-app.yaml

Add the following contents to the file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: dev
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"

Save and close the file.

Apply the new deployment to your cluster.

$ kubectl create -f hello-app.yaml

View the deployment status and verify that it's available and ready.

$ kubectl get deployments -n dev

Output:

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
hello-app   3/3     3            3           70s

Create a new service file hello-app-svc.yaml.

$ nano hello-app-svc.yaml

Add the following contents to the file:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  namespace: dev
  labels:
    app: hello
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP

Save and close the file.

Create the service.

$ kubectl create -f hello-app-svc.yaml

Verify that the service is available in the dev namespace.

$ kubectl get services -n dev

Output:

NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
hello-svc   ClusterIP   10.100.32.138   <none>        80/TCP    66s

Define an Ingress Resource

Create a new Ingress resource file ingress-demo.yaml.

$ nano ingress-demo.yaml

Add the following contents to the file. Replace example.com with your actual domain.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-svc
            port:
              number: 80

Save and close the file.

The above YAML file defines a new ingress-demo Ingress resource in the dev namespace.
All matching requests to the example.com/hello URL are forwarded to the hello-svc cluster resource.

Deploy the Ingress resource.

$ kubectl create -f ingress-demo.yaml

Verify that the Ingress resource is available in your dev cluster namespace.

$ kubectl get ingress -n dev

Using Curl, send a test request to your configured Ingress URL and verify that the linked service accepts your request.

$ curl http://example.com/hello

Output:

Hello, world!
Version: 2.0.0
Hostname: hello-app-5fb487d974-4rgsr

As returned by the URL request, the Ingress controller is correctly routing external traffic to the configured service resource.

Connect Multiple Services to a Single Ingress Resource

Depending on your cluster resources, you can connect multiple services to a single Ingress resource file. This allows you to configure a single domain with multiple path definitions. In this section, create new cluster services and define multiple paths to a single domain in an Ingress resource as described in the steps below.

Create a new foo.yaml service definition file.

$ nano foo.yaml

Add the following contents to the file:

kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - name: foo-app
    image: 'kicbase/echo-server:1.0'
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  - port: 8080

Save and close the file.

The above resource definition file creates an echo-server service that runs on port 8080, just like the sample application you deployed earlier.

Apply the resource configuration to your cluster.

$ kubectl create -f foo.yaml

Create another deployment file bar.yaml.

$ nano bar.yaml

Add the following contents to the file:

kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: bar
spec:
  containers:
  - name: bar-app
    image: 'kicbase/echo-server:1.0'
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service
spec:
  selector:
    app: bar
  ports:
  - port: 8080

Save and close the file.
Apply the resource to your cluster.

$ kubectl create -f bar.yaml

Verify that the foo-service and bar-service cluster services are available.

$ kubectl get services

Output:

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
bar-service   ClusterIP   10.109.10.196   <none>        8080/TCP   10h
foo-service   ClusterIP   10.108.38.206   <none>        8080/TCP   10h
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    11h

Verify that the respective service pods are available and running.

$ kubectl get pods

Output:

NAME      READY   STATUS    RESTARTS   AGE
bar-app   1/1     Running   0          10h
foo-app   1/1     Running   0          10h

As displayed in the cluster resources output, you have deployed two additional pods and services to the default namespace, which is different from the dev namespace you applied earlier.

To expose the Kubernetes services through a single host definition, create a new Ingress resource file foo-bar-ingress.yaml.

$ nano foo-bar-ingress.yaml

Add the following contents to the file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-foo-bar-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /foo
        backend:
          service:
            name: foo-service
            port:
              number: 8080
      - pathType: Prefix
        path: /bar
        backend:
          service:
            name: bar-service
            port:
              number: 8080

Save and close the file.

The above Ingress resource defines rules that direct incoming traffic from the example.com host URL to the foo-service and bar-service cluster resources. The controller handles requests to each of the services based on the defined path.

Deploy the Ingress resource.

$ kubectl create -f foo-bar-ingress.yaml

Verify that the Ingress resource is available in your cluster.

$ kubectl get ingress

Output:

NAME                   CLASS   HOSTS         ADDRESS       PORTS   AGE
ingress-foo-bar-demo   nginx   example.com   192.0.2.100   80      10h

Using Curl, test access to the /foo path.

$ curl http://example.com/foo

Output:

Request served by foo-app

HTTP/1.1 GET /foo
Host: example.com
Accept: */*
...
As displayed in the above output, requests to the /foo path are successfully handled by the foo-service.

Test access to the /bar path.

$ curl http://example.com/bar

Output:

Request served by bar-app

HTTP/1.1 GET /bar
Host: example.com
Accept: */*
...

As displayed in the above output, requests to the /bar path are successfully handled by the bar-app.

You have implemented an Ingress resource that directs traffic from one host to multiple paths backed by different services. You can configure multiple resource definitions with different paths to forward external requests to cluster services.

Connect Your Services Across Different Namespaces

In production environments, it's recommended to separate cluster services into different namespaces. However, an Ingress resource can only reference backend Services in its own namespace, so you normally have to create a separate Ingress resource per namespace to reach each set of services. It's still possible for a single Ingress resource to route traffic to services in different namespaces: using the Kubernetes ExternalName Service type, you can create bridge Services that point to services in other namespaces. For example, configure an Ingress resource in the dev namespace that references the deployed foo-service and bar-service in the default namespace as described below.

To create this bridge from your dev namespace to the services in the default namespace, create a new foo-bar-hello-bridge.yaml service definition file.

$ nano foo-bar-hello-bridge.yaml

Add the following contents to the file:

apiVersion: v1
kind: Service
metadata:
  name: hello-foo-bridge
  namespace: dev
spec:
  type: ExternalName
  externalName: foo-service.default
---
apiVersion: v1
kind: Service
metadata:
  name: hello-bar-bridge
  namespace: dev
spec:
  type: ExternalName
  externalName: bar-service.default

Save and close the file.
In the above configuration, the .spec.externalName definition creates a bridge to the linked service. foo-service.default points to the foo-service resource, and bar-service.default points to the bar-service resource in the default namespace. The .metadata.namespace definition deploys each bridge service to the dev namespace.

Deploy the bridge services to your cluster.

$ kubectl create -f foo-bar-hello-bridge.yaml

Verify that the bridge services are available in the dev namespace.

$ kubectl get services -n dev

Output:

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP           PORT(S)   AGE
hello-bar-bridge   ExternalName   <none>          bar-service.default   <none>    10h
hello-foo-bridge   ExternalName   <none>          foo-service.default   <none>    10h
hello-svc          ClusterIP      10.102.14.114   <none>                80/TCP    11h

As displayed in the above output, you can use hello-foo-bridge and hello-bar-bridge as backends for Ingress resources deployed in the dev namespace. The services point to the foo-service and bar-service resources respectively in the default namespace.

To combine services in the default namespace with the dev namespace, create a new Ingress resource file single-ingress.yaml.

$ nano single-ingress.yaml

Add the following contents to the file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: single-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-svc
            port:
              number: 80
      - path: /dev-foo
        pathType: Prefix
        backend:
          service:
            name: hello-foo-bridge
            port:
              number: 8080
      - path: /dev-bar
        pathType: Prefix
        backend:
          service:
            name: hello-bar-bridge
            port:
              number: 8080

Save and close the file.
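The bridge works because cluster DNS answers queries for an ExternalName Service with a CNAME record for its externalName. A short name such as foo-service.default ultimately resolves through the standard in-cluster Service DNS form, which the following Python sketch spells out (assuming the default cluster.local cluster domain):

```python
def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Fully qualified in-cluster DNS name for a Kubernetes Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# An ExternalName bridge such as hello-foo-bridge returns a CNAME for
# its externalName (foo-service.default), so traffic sent through the
# bridge ultimately reaches:
print(service_fqdn("foo-service", "default"))  # foo-service.default.svc.cluster.local
```

This is why the short two-label form service.namespace is sufficient in the externalName field: the cluster DNS search path expands it to the full name shown above.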
To avoid conflicting rules on the /hello path, delete the previous ingress-demo resource in the dev namespace.

$ kubectl delete ingress ingress-demo -n dev

Deploy the new Ingress resource to your cluster.

$ kubectl create -f single-ingress.yaml

Using Curl, test access to the hello-svc in the dev namespace using the /hello path.

$ curl http://example.com/hello

Output:

Hello, world!
Version: 2.0.0
...

Test access to the foo service in the default namespace.

$ curl http://example.com/dev-foo

Output:

Request served by foo-app

HTTP/1.1 GET /
Host: example.com
Accept: */*
...

Test access to the bar service in the default namespace.

$ curl http://example.com/dev-bar

Output:

Request served by bar-app

HTTP/1.1 GET /
Host: example.com
Accept: */*
...

As displayed in the above outputs, the Ingress resource correctly bridges services between the dev and default namespaces using the bridge services you created earlier.

Apply TLS/SSL Termination to Accept HTTPS Connections

TLS/SSL termination is the process of decrypting network traffic secured with the TLS/SSL protocols. It happens at a designated point within your network infrastructure, such as a load balancer, server, or a reverse proxy like the Ingress controller. To apply TLS/SSL termination to your Ingress resources, follow the steps below to generate a self-signed SSL certificate and apply it to your cluster.

Using the OpenSSL utility, generate a new self-signed certificate. Replace example.com with your actual domain name and desired certificate details.

$ openssl req -x509 \
    -sha256 -days 365 \
    -nodes \
    -newkey rsa:2048 \
    -subj "/CN=example.com/C=US/L=San Francisco" \
    -keyout root.key -out root.crt

The above command generates a new certificate file root.crt and a corresponding private key file root.key in your working directory.
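Before wiring the certificate into an Ingress resource, it helps to know how the controller chooses which certificate to present: when a client connects over HTTPS, the controller compares the client's SNI hostname with the hosts entries of each Ingress tls block, and falls back to its built-in self-signed "fake" certificate when nothing matches. The following Python sketch is a simplified illustration of that selection logic; the data structure, function name, and fallback string are hypothetical, not any real controller API:

```python
# Simplified sketch of TLS secret selection by SNI hostname. The
# ingress-nginx controller falls back to a built-in self-signed
# certificate when no tls block matches the requested host.
TLS_BLOCKS = [
    {"hosts": ["example.com"], "secretName": "hello-app-tls"},
]

def select_secret(sni_host: str, default="default-fake-certificate") -> str:
    for block in TLS_BLOCKS:
        for host in block["hosts"]:
            if host == sni_host:
                return block["secretName"]
            # A wildcard entry like *.example.com covers one extra label.
            if host.startswith("*.") and sni_host.split(".", 1)[-1:] == [host[2:]]:
                return block["secretName"]
    return default

print(select_secret("example.com"))  # hello-app-tls
print(select_secret("other.test"))   # default-fake-certificate
```

This fallback behavior explains why a misconfigured tls block shows a certificate named "Kubernetes Ingress Controller Fake Certificate" in curl output instead of your own.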
Create a new Kubernetes secret to upload your self-signed SSL certificate credentials to the cluster.

$ kubectl create secret tls hello-app-tls \
    --namespace dev \
    --key root.key \
    --cert root.crt

Create a new ingress-tls.yaml Ingress resource file.

$ nano ingress-tls.yaml

Add the following contents to the file. Verify that the .spec.tls field and the .secretName value match your Kubernetes Secret details.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls-demo
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: hello-app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /hello-tls
        pathType: Prefix
        backend:
          service:
            name: hello-svc
            port:
              number: 80

Save and close the file.

The above Ingress resource sets a new path /hello-tls with TLS termination that points to the hello-svc service.

Apply the new Ingress resource to your cluster.

$ kubectl apply -f ingress-tls.yaml

Send an HTTPS request to the /hello-tls path on your configured host domain and verify that the cluster service responds to your request.

$ curl -kv https://example.com/hello-tls

Output:

* Trying controller_ip:443...
* Connected to controller_ip (controller_ip) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
...
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server accepted h2
* Server certificate:
*  subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
*  start date: Oct  4 16:25:10 2023 GMT
*  expire date: Oct  3 16:25:10 2024 GMT
*  issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
*  SSL certificate verify result: self-signed certificate (18), continuing anyway.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://example.com/hello-tls
* [HTTP/2] [1] [:method: GET]
...
...
Hello, world!
Version: 2.0.0
Hostname: hello-app-5fb487d974-4rgsr
* Connection #0 to host example.com left intact

From the output, you can see the TLS handshake process from start to finish, with the hello-svc successfully responding to the request. This shows that TLS termination is successful.

To apply trusted Let's Encrypt SSL certificates to your cluster, visit how to Set up Nginx Ingress Controller with SSL on VKE.

Troubleshooting Ingress Controller Errors

When configuring your Ingress controller, you may encounter errors you need to investigate and resolve to have the controller running correctly. Below is how you can find and troubleshoot common Ingress controller errors.

View Ingress Controller Logs

Verify the Ingress controller pod status.

$ kubectl get pods -n ingress-nginx

Output:

NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-c5c658699-65fk4   1/1     Running   0          41m

View the Ingress controller pod logs to identify the potential source of an error. Replace ingress-nginx-controller-c5c658699-65fk4 with your actual pod name.

$ kubectl logs ingress-nginx-controller-c5c658699-65fk4 -n ingress-nginx

Examine the log entries and find the source of application errors that affect your controller performance.

Verify DNS Records

When using a host such as example.com, verify that the domain resolves to your Ingress controller load balancer IP address. Improper DNS resolution can lead to timeout errors when trying to access defined paths. Verify your domain address record using nslookup as described below.

$ nslookup www.example.com

Your output should look like the one below:

Server: ...
...
Non-authoritative answer:
...
Address: 192.0.2.100

Error 404

Describe the target Ingress resource and verify the configurations in the Rules: section.

$ kubectl describe ingress single-ingress -n dev

Output:

Name:             single-ingress
Labels:
Namespace:        dev
Address:          192.0.2.100
Ingress Class:    nginx
Default backend:
Rules:
  Host         Path      Backends
  ----         ----      --------
  example.com
               /hello    hello-svc:80 (10.244.161.195:8080,10.244.54.196:8080,10.244.91.198:8080)
               /dev-foo  hello-foo-bridge:8080 ()
               /dev-bar  hello-bar-bridge:8080 ()
Annotations:    nginx.ingress.kubernetes.io/rewrite-target: /
Events:

In the above output, the Ingress resource includes the /hello, /dev-foo, and /dev-bar paths that point to the backend services. To fix a 404 error, verify that your target paths point to the correct service and port. If not, edit your Ingress resource manifest to correct the paths and service definitions.

Conclusion

In this guide, you implemented Kubernetes Ingress operations to correctly route and handle external requests in a VKE cluster. Depending on your cluster services, you can deploy an Ingress controller to route incoming traffic to multiple services across all cluster namespaces. For more information about the Nginx Ingress Controller applied in this guide, visit the official documentation.

Next Steps

To implement more solutions in your Rcs Kubernetes Engine (VKE) cluster, visit the following resources:

- How to Improve VKE Cluster Observability and Security Using Kubespy
- How to Secure a VKE Cluster Using Traefik, Cert-Manager and Let's Encrypt
- How to Install MySQL on Rcs Kubernetes Engine (VKE)
