How to schedule pods to restart using Kubernetes CronJob
May 12, 2020
This is a filthy way to restart pods, but sometimes we need to do filthy things.
As usual, this post will be short and useful (I guess).
You will need a few Kubernetes kinds:
- ServiceAccount, to grant permissions to the CronJob
- Role, to define which verbs your CronJob can use
- RoleBinding, to bind the Role to the ServiceAccount
- CronJob, to restart your pod
# Create file cron-job.yaml
```yaml
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: deleting-pods
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deleting-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "patch", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deleting-pods
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deleting-pods
subjects:
  - kind: ServiceAccount
    name: deleting-pods
    namespace: my-namespace
---
apiVersion: batch/v1beta1  # use batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: deleting-pod
  namespace: my-namespace
spec:
  concurrencyPolicy: Forbid
  schedule: "0 */1 * * *"  # at minute 0 past every hour; change it to suit
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: deleting-pods
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - 'kubectl'
                - 'delete'
                - 'pod'
                - 'my-pod'
                - '--force'
                - '--grace-period=0'
```
# kubectl apply -f cron-job.yaml
# kubectl get cronjob -n my-namespace
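If you want to sanity-check a schedule expression before deploying, cron's field semantics can be sketched in a few lines of plain Python. This is an illustrative simplification, not the controller's actual parser: it only handles `*`, `*/n`, and plain numbers, and the function names are made up for this sketch.

```python
def field_matches(field: str, value: int) -> bool:
    """Check one cron field against a value ('*', '*/n', or a plain number)."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr: str, minute: int, hour: int, dom: int, month: int, dow: int) -> bool:
    """Check a 5-field cron expression (minute hour day-of-month month day-of-week)."""
    fields = expr.split()
    return all(field_matches(f, v) for f, v in zip(fields, (minute, hour, dom, month, dow)))

# "0 */1 * * *" fires at minute 0 of every hour:
print(cron_matches("0 */1 * * *", minute=0, hour=13, dom=12, month=5, dow=2))   # True
print(cron_matches("0 */1 * * *", minute=30, hour=13, dom=12, month=5, dow=2))  # False
```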
So, in my case I have a StatefulSet, but if you have a Deployment, check the following link, which is more focused on Deployments.
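For a Deployment, a gentler alternative to force-deleting a single pod is `kubectl rollout restart`, which recreates pods through the normal rolling-update machinery. A sketch of the same CronJob container using that command (the Deployment name `my-deployment` is an assumption, and the Role above would then need the `deployments` resource in the `apps` API group with `get` and `patch` verbs):

```yaml
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/my-deployment'  # assumed name; replace with yours
                - '-n'
                - 'my-namespace'
```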
Cheers :)