I've got a k8s CronJob that consists of an init container and one pod container. If the init container fails, the Pod's main container never gets started and stays in "PodInitializing" indefinitely.
My intent is for the job to fail if the init container fails.
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-name
  namespace: default
  labels:
    run: job-name
spec:
  schedule: "15 23 * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: "Forbid"
  successfulJobsHistoryLimit: 30
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      # only try twice
      backoffLimit: 2
      activeDeadlineSeconds: 60
      template:
        spec:
          initContainers:
          - name: init-name
            image: init-image:1.0
            restartPolicy: Never
          containers:
          - name: some-name
            image: someimage:1.0
            restartPolicy: Never
Running kubectl describe on the pod that's stuck results in:
Name:               job-name-1542237120-rgvzl
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               my-node-98afffbf-0psc/10.0.0.0
Start Time:         Wed, 14 Nov 2018 23:12:16 +0000
Labels:             controller-uid=ID
                    job-name=job-name-1542237120
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container elasticsearch-metrics; cpu request for init container elasticsearch-repo-setup; cpu requ...
Status:             Failed
IP:                 10.0.0.0
Controlled By:      Job/job-1542237120
Init Containers:
  init-container-name:
    Container ID:   docker://ID
    Image:          init-image:1.0
    Image ID:       init-imageID
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 14 Nov 2018 23:12:21 +0000
      Finished:     Wed, 14 Nov 2018 23:12:32 +0000
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro)
Containers:
  some-name:
    Container ID:
    Image:          someimage:1.0
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True

Accepted Answer
I think you may have missed that this is the expected behavior of init containers. The rule is that if an init container fails and restartPolicy is set to Never, the Pod will not restart; otherwise, Kubernetes will keep restarting it until it succeeds.
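As a sketch of how that rule applies to the manifest above (untested, based on your field names): restartPolicy is a field of the pod template's spec, not of individual containers or initContainers. With it placed there and set to Never, a failed init container marks the whole Pod as Failed, and the Job counts that attempt against backoffLimit:

```yaml
# Sketch only: restartPolicy moved to the pod template spec.
jobTemplate:
  spec:
    backoffLimit: 2           # retry the whole pod at most twice
    activeDeadlineSeconds: 60
    template:
      spec:
        restartPolicy: Never  # failed init container => Pod Failed
        initContainers:
        - name: init-name
          image: init-image:1.0
        containers:
        - name: some-name
          image: someimage:1.0
```

Once the backoffLimit retries are exhausted, the Job itself is marked Failed, which matches your stated intent.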
Also:
If the init container fails, the Pod in the main container never gets started, and stays in "PodInitializing" indefinitely.
According to the documentation:
A Pod cannot be Ready until all Init Containers have succeeded. The ports on an Init Container are not aggregated under a service. A Pod that is initializing is in the Pending state but should have a condition Initializing set to true.
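You can see this state programmatically as well. A minimal sketch of how you might detect a failed init container from a pod's status (the shape of the dict mirrors what `kubectl get pod <name> -o json` returns under `.status`; the sample data below is hypothetical, modeled on the describe output in the question):

```python
def init_container_failed(pod_status: dict) -> bool:
    """Return True if any init container terminated with a non-zero exit code."""
    for cs in pod_status.get("initContainerStatuses", []):
        terminated = cs.get("state", {}).get("terminated")
        if terminated and terminated.get("exitCode", 0) != 0:
            return True
    return False


# Hypothetical status, modeled on the describe output above:
# init container exited 1, pod condition Initialized is False.
sample_status = {
    "initContainerStatuses": [
        {
            "name": "init-name",
            "state": {"terminated": {"exitCode": 1, "reason": "Error"}},
        }
    ],
    "conditions": [{"type": "Initialized", "status": "False"}],
}

print(init_container_failed(sample_status))  # True
```

A check like this could run in a monitoring script to catch pods that the Job controller would otherwise leave sitting in PodInitializing.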
I can see that you tried to change this behavior, but I am not sure whether you can do that with a CronJob; I have seen examples with Jobs. I am only theorizing, though, and if this post does not help you solve the issue, I can try to recreate it in a lab environment.