A guide on how to restart Kubernetes pods with kubectl

Alex Kondratiev


Even in well-designed Kubernetes environments, you may occasionally need to restart pods due to factors such as a cluster outage, a pod failure, or configuration changes.


Unfortunately, in Kubernetes (unlike, for example, Docker), there is no direct restart command for pods. The kubectl CLI does not provide a built-in command like kubectl restart pod or kubectl restart deployment. However, there are several workarounds for restarting pods using kubectl.

What are some reasons for restarting a pod?

Although restarting a Kubernetes pod is not a standard operation, it can be helpful in the following situations:

  • Applying configuration changes. If configuration that the pod only reads at startup (for example, environment variables populated from a ConfigMap or Secret) has changed, a manual restart may be required for the changes to take effect.
  • Recovery. If a container in the pod has failed, restarting the pod may be necessary to restore the application’s functionality.
  • Pod termination. If the pod was terminated due to an Out Of Memory (OOM) error, it will need to be restarted after the resource requests and limits have been adjusted.
  • Releasing resources. A pod may consume excessive memory or CPU, leading to performance issues or impacting other workloads. Restarting the pod helps release those resources and resolve or mitigate the problems.

There are several standard methods for restarting pods using combinations of kubectl commands.

Note. After the restart, the pod name may change. You can view the current list of pods using the command kubectl get pods.
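
For example, to watch the pod list update in a given namespace while a restart is in progress (the namespace here is just a placeholder), you can add the -w flag:

shell

kubectl get pods -n <namespace> -w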

Rollout Restart

This command triggers a rolling update of all pods in a deployment. Following the deployment's rolling update strategy, Kubernetes gradually replaces the old pods with new ones that pick up the updated configuration, so the application keeps serving traffic during the restart.

shell

kubectl rollout restart deployment <deployment_name> -n <namespace>
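
To confirm that the rolling restart has finished, one option is to follow it with kubectl rollout status, which waits until the new pods are ready (the deployment and namespace names are placeholders, as above):

shell

kubectl rollout status deployment <deployment_name> -n <namespace>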

Deleting the Pod

If you manually delete a pod that is managed by a controller (for example, a Deployment or ReplicaSet), Kubernetes will automatically create a new one based on the corresponding configuration. This happens because the controller operates declaratively, recreating any missing pods to match the desired state. Note that a bare pod created without a controller will not be recreated.

shell

kubectl delete pod <pod_name> -n <namespace>
Note. If you need to restart a large number of pods, you can delete all pods that share a specific label: kubectl delete pod -l "app=myapp" -n <namespace>

Alternatively, you can delete the ReplicaSet that the deployment manages; Kubernetes will recreate it along with all of its pods:

shell

kubectl delete replicaset <name> -n <namespace>
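
If you are not sure which ReplicaSet belongs to the deployment, you can list the ReplicaSets first; the label selector below is only an assumption based on the app=myapp example above:

shell

kubectl get replicaset -n <namespace> -l app=myapp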

Scaling Replicas

You can also restart pods using the scale command.

First, you need to set the number of replicas to zero. Kubernetes will terminate all existing replicas.

shell

kubectl scale deployment <deployment_name> -n <namespace> --replicas=0

Then, you can create the desired number of replicas, and Kubernetes will create new pods.

shell

kubectl scale deployment <deployment_name> -n <namespace> --replicas=<replica_count>
Note. This method may cause application downtime while the new pods are being created.
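
To verify that all replicas are running again after scaling back up, you can check the deployment status, for example:

shell

kubectl get deployment <deployment_name> -n <namespace>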

Changing Environment Variables

You can also force a restart by changing an environment variable on the deployment with the kubectl set env command. Because this modifies the pod template, Kubernetes automatically rolls out new pods for the change to take effect.

shell

kubectl set env deployment <deployment_name> -n <namespace> <env_name>=<new_value>
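
To double-check the result, kubectl set env also supports a --list flag that prints the environment variables currently defined on the deployment:

shell

kubectl set env deployment <deployment_name> -n <namespace> --list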

Conclusions

There are several common methods for restarting Kubernetes pods, so you can choose the most appropriate one based on how quickly the pods need to be restarted and how much downtime the application can tolerate.

The recommended approach is to use kubectl rollout restart, as it performs a controlled restart of pods without causing application downtime.
Alex Kondratiev

Founder of ITsyndicate. DevOps Enthusiast with 15+ years of experience in cloud, Infrastructure as Code, Kubernetes, and automation. Specialized in architecting secure, scalable, and resilient systems.
