Delivering and deploying cloud applications securely takes skill and dedication, and not every developer has the expertise to ensure that applications are deployed safely. Fortunately, the Center for Internet Security (CIS) publishes security best practices, created by industry experts, for deploying cloud applications safely.
To make deployment security easier, a tool called Kube-bench uses the CIS best practices to check whether your deployments have been configured safely. It gives you a list of the steps needed to ensure that your deployments are secure, and it also raises warnings such as:
Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
Ensure that encryption providers are appropriately configured (Manual)
Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
Kube-bench is free and open source. In this tutorial, you will learn how to add Kube-bench to your Kubernetes cluster and use it to verify that your deployments are executed safely.
Prerequisites
You need kubectl and a running Kubernetes cluster. In addition, make sure that you have created at least one deployment to test Kube-bench against.
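Before continuing, you can quickly confirm that kubectl can reach your cluster and that a deployment exists. The commands below are just a sanity check; the deployment names in your output will depend on what you have created:
$ kubectl get nodes
$ kubectl get deployments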
What Vulnerabilities Are Associated With Deployments?
Incorrect configuration, such as using insecure protocols or leaving default passwords in place, can leave a deployment exposed. Below are examples of vulnerabilities your deployments may encounter.
Network vulnerabilities: Network flaws such as unpatched servers, weak network passwords, and unencrypted communication channels occur inside the infrastructure that hosts the application deployment. Containers frequently need to communicate with each other, either on the same host or across hosts, and this traffic can be intercepted by an attacker if it is not adequately secured. For instance, if two containers exchange data over an unencrypted channel, an attacker can intercept and read that data. To prevent this, use secure communication methods such as TLS for traffic between containers.
Containers that have privileged flags enabled: Containers package and isolate applications so that they run reliably in many environments, and they add a layer of protection by restricting an application's access to the host system. However, if a container is started with the privileged flag, it can bypass these security controls and access the host system's resources directly. In the event of a breach, attackers can use a privileged container to reach far more of the host machine's resources, as illustrated in the example below.
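For illustration, the following sketch of a pod spec runs its container with the privileged flag enabled, which is exactly the kind of configuration you want to avoid. The pod name and image are placeholders, not values from this tutorial:
# Risky configuration: avoid privileged: true unless absolutely necessary.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod-example   # hypothetical name, for illustration only
spec:
  containers:
    - name: app
      image: nginx:1.25          # placeholder image
      securityContext:
        privileged: true         # grants the container near-host-level access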
Using insecure images: Using trusted and secure images is crucial when deploying containers. An image with known security flaws, or one that has not been properly maintained, puts the container at risk. For instance, if a known vulnerability in an image has not been patched, an attacker may be able to exploit it.
Kubernetes and container misconfiguration: Kubernetes is a popular container orchestration platform that enables businesses to manage and deploy their containerized applications at scale. However, if Kubernetes is not set up correctly, security vulnerabilities can be introduced. For instance, if an attacker gains access to the Kubernetes API server, they may be able to compromise every container in the cluster. To prevent unwanted access, it is crucial to properly secure the Kubernetes API server as well as the platform's other components.
Containerization poses a number of potential security issues, including the use of insecure images, running containers with privileged flags, misconfiguring container orchestration systems like Kubernetes, and unsecured communication between containers. To reduce these risks, it is crucial to properly secure containers and the systems they run on, and to use trusted images and secure communication channels. In the next section, you will learn ways to protect your deployment infrastructure.
What is Deployment Safety?
Deployment safety involves ensuring that your deployment configuration and deployment environment do not pose any security threat or risk. The demand for deployment security has grown along with the popularity of Kubernetes. The following are some top recommendations for protecting your Kubernetes deployments:
- Enable network policies to control how Kubernetes endpoints and groups of pods communicate. By doing so, you reduce the attack surface of your deployment and limit the likelihood of accidental data leaks. Kubernetes network policies clearly state who has permission to access certain components and resources (see the NetworkPolicy sketch after this list).
- Utilize Role-Based Access Control (RBAC): RBAC gives you the ability to specify granular permissions for users and processes within a Kubernetes cluster. This reduces the potential for privilege escalation and prevents unapproved access to resources (see the Role and RoleBinding sketch after this list).
- Use resource limits and requests to constrain how much CPU and memory a pod can use. This helps prevent resource-exhaustion attacks and ensures your deployment has enough resources to function properly; a shortage of resources can amount to a denial of service, because the application can no longer complete all of its tasks and serve requests (see the hardened pod sketch after this list).
- Pod security standards can be used to impose security-related restrictions on pods, such as who is allowed to run them and what privileges they may have. This reduces the likelihood of malicious pods being deployed and helps prevent privilege escalation.
- Turn on auditing to track and record all API requests made to your Kubernetes cluster. This will help you identify and investigate any unusual or questionable behavior, and reviewing the logs can help you detect vulnerabilities before attackers exploit them.
- Use encrypted communication: Whenever possible, use encrypted communication to protect against eavesdropping attackers. TLS should be used for component-to-component communication.
- Verify that your container images are secure by using the docker scan command, which detects known vulnerabilities in your images. The output also includes remediation advice and flags images that are out of date. After updating your images, make sure your containers are recreated from the latest image.
- Use namespaces to logically group resources in your Kubernetes cluster. This can help reduce resource congestion and the possibility of unintended data leakage between different environments or teams.
- Scan deployment manifests before applying them to the cluster. This prevents you from applying manifests that contain misconfigurations and errors. Kubernetes itself is a secure platform, so most vulnerabilities stem from developer mistakes, namely misconfigurations. Here is a list of crucial fields you should make sure are configured well (several of them appear in the hardened pod sketch after this list):
- seLinuxOptions
- runAsNonRoot
- procMount
- AppArmor
- seccompProfile
- readOnlyRootFilesystem
- allowPrivilegeEscalation
- Keep your cluster updated: It's essential to keep your Kubernetes cluster and all of its components up to date in order to benefit from the latest security patches and improvements. This means updating the cluster version as well as your container images, dependencies, and other components in your production environment.
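As a minimal sketch of the network policy recommendation above, the following NetworkPolicy only allows pods labeled app: frontend to reach pods labeled app: backend on port 8080. The policy name, namespace, labels, and port are placeholders for your own workloads:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # placeholder label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # placeholder label
      ports:
        - protocol: TCP
          port: 8080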
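For the RBAC recommendation, a minimal sketch might look like the following Role and RoleBinding, which grant a hypothetical user read-only access to pods in a single namespace. The role, binding, and user names are placeholders:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical role name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods             # hypothetical binding name
  namespace: default
subjects:
  - kind: User
    name: jane                # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io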
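The resource limit and security context recommendations can be combined in a single container spec. The hardened pod below is only a sketch with placeholder names, images, and resource values, but it shows several of the fields listed above, including runAsNonRoot, allowPrivilegeEscalation, readOnlyRootFilesystem, and seccompProfile:
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                      # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0   # placeholder image
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
      securityContext:
        runAsNonRoot: true                # refuse to run as the root user
        allowPrivilegeEscalation: false   # block setuid-style escalation
        readOnlyRootFilesystem: true      # container cannot write to its root filesystem
        seccompProfile:
          type: RuntimeDefault            # apply the runtime's default seccomp profile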
There are many steps you can take to secure your Kubernetes deployments. By following these best practices, you can keep your deployment secure and compliant while reducing the likelihood of data breaches and other security issues.
Adding Kube-bench to Your Kubernetes Cluster
Kube-bench runs as a pod inside your cluster, created by a Kubernetes Job. Create a YAML file called job.yaml and add the following contents; applying this file will create the Kube-bench Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      hostPID: true
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:v0.6.10
          command: ["kube-bench"]
          volumeMounts:
            - name: var-lib-etcd
              mountPath: /var/lib/etcd
              readOnly: true
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
              readOnly: true
            - name: var-lib-kube-scheduler
              mountPath: /var/lib/kube-scheduler
              readOnly: true
            - name: var-lib-kube-controller-manager
              mountPath: /var/lib/kube-controller-manager
              readOnly: true
            - name: etc-systemd
              mountPath: /etc/systemd
              readOnly: true
            - name: lib-systemd
              mountPath: /lib/systemd/
              readOnly: true
            - name: srv-kubernetes
              mountPath: /srv/kubernetes/
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
            # /usr/local/mount-from-host/bin is mounted to access kubectl / kubelet, for auto-detecting the Kubernetes version.
            # You can omit this mount if you specify --version as part of the command.
            - name: usr-bin
              mountPath: /usr/local/mount-from-host/bin
              readOnly: true
            - name: etc-cni-netd
              mountPath: /etc/cni/net.d/
              readOnly: true
            - name: opt-cni-bin
              mountPath: /opt/cni/bin/
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: var-lib-etcd
          hostPath:
            path: "/var/lib/etcd"
        - name: var-lib-kubelet
          hostPath:
            path: "/var/lib/kubelet"
        - name: var-lib-kube-scheduler
          hostPath:
            path: "/var/lib/kube-scheduler"
        - name: var-lib-kube-controller-manager
          hostPath:
            path: "/var/lib/kube-controller-manager"
        - name: etc-systemd
          hostPath:
            path: "/etc/systemd"
        - name: lib-systemd
          hostPath:
            path: "/lib/systemd"
        - name: srv-kubernetes
          hostPath:
            path: "/srv/kubernetes"
        - name: etc-kubernetes
          hostPath:
            path: "/etc/kubernetes"
        - name: usr-bin
          hostPath:
            path: "/usr/bin"
        - name: etc-cni-netd
          hostPath:
            path: "/etc/cni/net.d/"
        - name: opt-cni-bin
          hostPath:
            path: "/opt/cni/bin/"
Use the following command to create the Kube-bench Job:
$ kubectl apply -f job.yaml
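Optionally, you can wait for the Job to finish before reading its logs:
$ kubectl wait --for=condition=complete job/kube-bench --timeout=120s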
To check if the Job has created the pod successfully, get all pods using the following command:
$ kubectl get pods
The pod will show a Completed status after a few seconds.
NAME READY STATUS RESTARTS AGE
kube-bench-fx9jm 0/1 Completed 0 5m28s
How to Check if Deployments Have Been Executed Safely Using Kube-bench
After you have successfully added Kube-bench to your cluster, it automatically scans your cluster and deployments. Use the following command to fetch the security report generated by Kube-bench, substituting the name of your own Kube-bench pod from the previous step (the pod name suffix is randomly generated):
$ kubectl logs kube-bench-fx9jm
You will get output like the following, which lists the checks that were run and the tasks you should perform to ensure that your deployments are safe:
[INFO] 1.3 Controller Manager
[WARN] 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
[FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)
[PASS] 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
[PASS] 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
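The full report is fairly long, so if you only want to review the failing and warning checks, you can filter the log output, for example:
$ kubectl logs kube-bench-fx9jm | grep -E "\[FAIL\]|\[WARN\]"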
The scan output also has a section called "Remediations master" that shows you how to resolve each detected problem:
1.1.12 On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
At the end of the report you will find a summary:
== Summary master ==
38 checks PASS
11 checks FAIL
13 checks WARN
0 checks INFO
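Once you have reviewed the report and worked through the remediations, you can remove the Kube-bench Job from the cluster:
$ kubectl delete -f job.yaml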
Why Kubernetes Deployments Should Be Secured at All Costs
A key reason Kubernetes deployments need to be secured is that they frequently contain sensitive data and information. A Kubernetes cluster, for instance, might hold private data, login credentials, or sensitive corporate information, and the company may face major repercussions if this data ends up in the wrong hands. Additionally, Kubernetes deployments frequently have access to the underlying infrastructure, such as cloud resources or virtual machines, which means an attacker might be able to reach and compromise those resources through a security flaw in the deployment.
Another reason to secure Kubernetes installations is that they are frequently used to manage large numbers of containers and microservices, which makes them a desirable target for attackers looking to mount a large-scale attack. A malicious actor might, for instance, launch a distributed denial-of-service (DDoS) attack against a Kubernetes deployment to disrupt business operations or steal data. The dynamic nature of containerized applications and the size of a typical Kubernetes deployment can also make it challenging to recognize and address security concerns.
In conclusion, securing Kubernetes deployments is essential for protecting against potential security threats and breaches. Because these deployments frequently contain sensitive data and sit on top of extensive infrastructure, it is crucial for businesses to adopt security best practices. By taking proactive measures to secure their Kubernetes deployments, organizations help ensure the safety of their applications and data.
Learn More
To learn more about Kube-bench, see the project documentation.