Clusterlint Error Fixes

Default Namespace

  • Name: default-namespace
  • Groups: basic

Namespaces are a way to limit the scope of the resources that subsets of users within a team can create. While a default namespace is created for every Kubernetes cluster, we do not recommend placing all of your resources in the default namespace, because of the risk of privilege escalation, resource name collisions, latency in operations as resources scale up, and mismanagement of Kubernetes objects. Having separate namespaces also lets you enable resource quotas to track node, CPU, and memory usage for individual teams (see the ResourceQuota sketch after the fix below).

Example

# Not recommended: Defining resources with no namespace, which adds them to the default.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  containers:
  - name: mypod
    image: nginx:1.17.0

How to Fix

# Recommended: Explicitly specify a namespace in the object config
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: test
  labels:
    name: mypod
spec:
  containers:
  - name: mypod
    image: nginx:1.17.0
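
Once workloads are grouped into namespaces, a resource quota can be attached to each namespace to cap compute usage and object counts per team. Below is a minimal sketch, assuming a namespace named test; the quota name and limits are illustrative.

# Sketch: cap CPU, memory, and Pod count for the "test" namespace (values are illustrative)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: test
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"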

Latest Tag

  • Name: latest-tag
  • Groups: basic

We do not recommend using container images with the latest tag, or omitting the tag entirely (which defaults to latest), as this leads to confusion about which version of the image is actually running. Pods get rescheduled often as conditions inside a cluster change, and upon a reschedule you may find that the image has silently changed to the newest release, which can break the application and make errors difficult to debug. Instead, update segments of the application individually, using images pinned to specific versions.

Example

# Not recommended: Not specifying an image tag, or using "latest"
spec:
  containers:
  - name: mypod
    image: nginx
  - name: redis
    image: redis:latest

How to Fix

# Recommended: Explicitly specify a tag or digest
spec:
  containers:
  - name: mypod
    image: nginx:1.17.0
  - name: redis
    image: redis@sha256:dca057ffa2337682333a3aba69cc0e7809819b3cd7fc78f3741d9de8c2a4f08b

CronJob Concurrency

  • Name: cronjob-concurrency
  • Groups: basic

We do not recommend a concurrencyPolicy of Allow for CronJob resources. If a CronJob-managed Pod does not run to completion within the expected window, multiple Pods can pile up over time, leaving several Pods stuck in a pending state and causing resource contention. Instead, prefer Forbid, which skips the new run if the previous job has not finished, or Replace, which cancels the still-running job and replaces it with the new one.

Example

# Not recommended: Having a concurrency policy of Allow
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mycron
spec:
  concurrencyPolicy: Allow

How to Fix

# Recommended: Having a concurrency policy of Forbid or Replace
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mycron
spec:
  concurrencyPolicy: Replace
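
The snippet above only shows the relevant field. A fuller sketch of a CronJob using Forbid could look like the following; the schedule and the busybox job are illustrative placeholders.

# Sketch: a complete CronJob with an explicit concurrency policy (schedule and job details are illustrative)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mycron
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid   # skip the new run if the previous job is still executing
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mycron
            image: docker.io/busybox:1.36.1
            command: ["sh", "-c", "date"]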

Privileged Containers

  • Name: privileged-containers
  • Groups: security

Use the privileged mode for trusted containers only. Because the privileged mode allows container processes to access the host, malicious containers can extensively damage the host and bring down services on the cluster. If you need to run containers in privileged mode, test the container before using it in production. For more information about the risks of running containers in privileged mode, please refer to the Kubernetes security context documentation.

Example

# Not recommended: Using privileged mode instead of granting capabilities when it's not necessary
spec:
  containers:
  - name: mypod
    image: nginx
    securityContext:
      privileged: true

How to Fix

# Recommended: Explicitly add only the needed capabilities to the container
spec:
  containers:
  - name: mypod
    image: nginx
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
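
For a tighter policy, the container can also drop all default capabilities and add back only the one it needs, while disabling privilege escalation. A sketch, with NET_ADMIN as the assumed requirement:

# Sketch: drop every default capability, re-add only NET_ADMIN, and block privilege escalation
spec:
  containers:
  - name: mypod
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
        add:
        - NET_ADMIN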

Run As Non-Root

  • Name: run-as-non-root
  • Groups: security

If containers within a pod are allowed to run as the root user (UID 0), the host can be subjected to malicious activity. We recommend using a user identifier (UID) other than 0 in your container image for running applications. You can also enforce this in the Kubernetes pod configuration, as shown below.

Example

# Not recommended: Doing nothing to prevent containers from running under UID 0
spec:
  containers:
  - name: mypod
    image: nginx

How to Fix

# Recommended: Ensure containers do not run as root
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: mypod
    image: nginx
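
To pin the exact identity rather than only forbid root, runAsUser and runAsGroup can be set alongside runAsNonRoot. A sketch, assuming the image is built to run as UID and GID 1000:

# Sketch: pin a specific non-root UID/GID (1000 is illustrative; the image must support running as that user)
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - name: mypod
    image: nginx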

Fully Qualified Image

  • Name: fully-qualified-image
  • Groups: basic

Docker is the most popular container runtime for Kubernetes, but Kubernetes supports other runtimes as well, such as containerd and CRI-O. If the registry is not prepended to the image name, Docker assumes docker.io and pulls the image from Docker Hub, whereas the other runtimes can fail to pull the image. To maintain portability, we recommend using a fully qualified image name. If the underlying runtime is changed and the object configs are deployed to a new cluster, fully qualified image names ensure that the applications do not break.

Example

# Not recommended: Failing to specify the registry in the image name
spec:
  containers:
  - name: mypod
    image: nginx:1.17.0

How to Fix

# Recommended: Provide the registry name in the image
spec:
  containers:
  - name: mypod
    image: docker.io/nginx:1.17.0

Node Name Selector

  • Name: node-name-pod-selector
  • Groups: doks

When a DOKS cluster is upgraded, the worker nodes’ hostnames change. If your pod spec relies on a node’s hostname to schedule pods onto specific nodes, pod scheduling will fail after the upgrade.

Example

# Not recommended: Using a raw Rcs.is resource name in the nodeSelector
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kubernetes.io/hostname: pool-y25ag12r1-xxxx

How to Fix

# Recommended: Use the DOKS-specific node pool label
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    doks.Rcs.is/node-pool: pool-y25ag12r1

Admission Controller Webhook

  • Name: admission-controller-webhook
  • Groups: basic

Admission control webhooks can disrupt normal cluster operations. Specifically, this happens when an admission control webhook targets a service that:

  • Does not exist
  • Is in a namespace that does not exist

Example

# Error: Configure a webhook pointing at a service that does not exist
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: sample-webhook.example.com
webhooks:
- name: sample-webhook.example.com
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
    scope: "Namespaced"
  clientConfig:
    service:
      # Illustrative names: neither this namespace nor this service exists in the cluster
      namespace: missing-namespace
      name: missing-service
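
How to Fix

Make sure the webhook's clientConfig points at a Service that exists, in a namespace that exists. A minimal sketch; the service name and namespace below are illustrative assumptions.

# Recommended: Point the webhook's clientConfig at a service and namespace that exist
webhooks:
- name: sample-webhook.example.com
  clientConfig:
    service:
      # Assumed names for illustration: this Service and namespace must already exist in the cluster
      namespace: webhook
      name: webhook-service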