Namespace stuck in Terminating state in a k8s cluster


1. Problem symptoms

While testing today, pods would not start in one namespace. Checking the namespace showed its status stuck in Terminating:

[root@node1 ~]# kubectl get ns
NAME                   STATUS        AGE
configmap              Terminating   135d
default                Active        207d
harbor                 Active        207d
kube-flannel           Terminating   17m
kube-node-lease        Active        207d
kube-public            Active        207d
kube-system            Active        207d
kubekey-system         Active        207d
kubernetes-dashboard   Active        207d
local-path-storage     Active        187d
nginx                  Active        146d
test                   Terminating   126d

Deleting with --force also just hangs:

[root@node1 ~]# kubectl delete ns test --force 
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
namespace "test" force deleted

2. Checking the resources in the namespace

Given the behavior above, the suspicion is that the namespace still contains resources that have not been released. Check with the following commands.

1) List all resources:
[root@node1 ~]# kubectl get all -n  test
No resources found in test namespace.
[root@node1 ~]# 
The output above shows there are no remaining resources in this namespace.

Some posts online suggest enumerating every namespaced resource type instead of relying on kubectl get all:

kubectl api-resources -o name --verbs=list --namespaced | xargs -n 1 kubectl get --show-kind --ignore-not-found -n test

That also came back empty.

3. Fixing the problem

Export the namespace details as JSON:

[root@node1 ~]# kubectl get  ns test -o json  > test.json

Edit test.json and make sure spec is empty, i.e. remove the finalizers block shown below:

    "spec": {"finalizers": [    #########"kubernetes"   ######### 删除这三行内容,告知k8s要删除的ns中内容为空]                  #########},

Now PUT the emptied namespace object back through the Kubernetes API, overwriting the original via the finalize subresource:

[root@node1 ~]# curl -k \
> -H "Content-Type: application/json" \
> -X PUT \
> --data-binary @test.json \
> http://127.0.0.1:8081/api/v1/namespaces/test/finalize
curl: (7) Failed connect to 127.0.0.1:8081; Connection refused

The connection is refused because nothing is listening on 127.0.0.1:8081 and the API server itself requires authentication, so curl cannot reach it directly. Run kubectl proxy first so that requests to a local port are forwarded to the API server using your kubeconfig credentials.

Start kubectl proxy on port 8081:

[root@node1 ~]# kubectl proxy --port=8081 
Starting to serve on 127.0.0.1:8081

Open a new terminal and run the curl command again; it now returns:

[root@node1 ~]# curl -k -H "Content-Type: application/json" -X PUT --data-binary @test.json http://127.0.0.1:8081/api/v1/namespaces/test/finalize
{"kind": "Namespace","apiVersion": "v1","metadata": {"name": "test","uid": "f2676c45-a75b-49be-9e01-84958bedc4a0","resourceVersion": "33942216","creationTimestamp": "2023-06-28T06:02:07Z","deletionTimestamp": "2023-06-28T06:02:24Z","labels": {"kubernetes.io/metadata.name": "test"},"annotations": {"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2023-06-28T06:02:07Z\",\"deletionTimestamp\":\"2023-06-28T06:02:24Z\",\"labels\":{\"kubernetes.io/metadata.name\":\"test\"},\"name\":\"test\",\"resourceVersion\":\"28081915\",\"uid\":\"f2676c45-a75b-49be-9e01-84958bedc4a0\"},\"spec\":{},\"status\":{\"conditions\":[{\"lastTransitionTime\":\"2023-06-28T06:02:29Z\",\"message\":\"Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server (\\\"Internal Server Error: \\\\\\\"/apis/metrics.k8s.io/v1beta1\\\\\\\": the server could not find the requested resource\\\") has prevented the request from succeeding\",\"reason\":\"DiscoveryFailed\",\"status\":\"True\",\"type\":\"NamespaceDeletionDiscoveryFailure\"},{\"lastTransitionTime\":\"2023-06-28T06:02:29Z\",\"message\":\"All legacy kube types successfully parsed\",\"reason\":\"ParsedGroupVersions\",\"status\":\"False\",\"type\":\"NamespaceDeletionGroupVersionParsingFailure\"},{\"lastTransitionTime\":\"2023-06-28T06:02:29Z\",\"message\":\"All content successfully deleted, may be waiting on finalization\",\"reason\":\"ContentDeleted\",\"status\":\"False\",\"type\":\"NamespaceDeletionContentFailure\"},{\"lastTransitionTime\":\"2023-06-28T06:02:29Z\",\"message\":\"All content successfully removed\",\"reason\":\"ContentRemoved\",\"status\":\"False\",\"type\":\"NamespaceContentRemaining\"},{\"lastTransitionTime\":\"2023-06-28T06:02:29Z\",\"message\":\"All content-preserving finalizers finished\",\"reason\":\"ContentHasNoFinalizers\",\"status\":\"False\",\"type\":\"NamespaceFinalizersRemaining\"}],\"phase\":\"Terminating\"}}\n"},"managedFields": [{"manager": "kubectl-create","operation": "Update","apiVersion": "v1","time": "2023-06-28T06:02:07Z","fieldsType": "FieldsV1","fieldsV1": {"f:metadata": {"f:labels": {".": {},"f:kubernetes.io/metadata.name": {}}}}},{"manager": "kube-controller-manager","operation": "Update","apiVersion": "v1","time": "2023-09-27T07:02:23Z","fieldsType": "FieldsV1","fieldsV1": {"f:status": {"f:conditions": {".": {},"k:{\"type\":\"NamespaceContentRemaining\"}": {".": {},"f:lastTransitionTime": {},"f:message": {},"f:reason": {},"f:status": {},"f:type": {}},"k:{\"type\":\"NamespaceDeletionContentFailure\"}": {".": {},"f:lastTransitionTime": {},"f:message": {},"f:reason": {},"f:status": {},"f:type": {}},"k:{\"type\":\"NamespaceDeletionDiscoveryFailure\"}": {".": {},"f:lastTransitionTime": {},"f:message": {},"f:reason": {},"f:status": {},"f:type": {}},"k:{\"type\":\"NamespaceDeletionGroupVersionParsingFailure\"}": {".": {},"f:lastTransitionTime": {},"f:message": {},"f:reason": {},"f:status": {},"f:type": {}},"k:{\"type\":\"NamespaceFinalizersRemaining\"}": {".": {},"f:lastTransitionTime": {},"f:message": {},"f:reason": {},"f:status": {},"f:type": {}}}}},"subresource": "status"},{"manager": "kubectl-client-side-apply","operation": "Update","apiVersion": "v1","time": "2023-11-02T02:46:01Z","fieldsType": "FieldsV1","fieldsV1": {"f:metadata": {"f:annotations": {".": {},"f:kubectl.kubernetes.io/last-applied-configuration": {}}}}}]},"spec": {},"status": {"phase": 
"Terminating","conditions": [{"type": "NamespaceDeletionDiscoveryFailure","status": "True","lastTransitionTime": "2023-06-28T06:02:29Z","reason": "DiscoveryFailed","message": "Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server (\"Internal Server Error: \\\"/apis/metrics.k8s.io/v1beta1\\\": the server could not find the requested resource\") has prevented the request from succeeding"},{"type": "NamespaceDeletionGroupVersionParsingFailure","status": "False","lastTransitionTime": "2023-06-28T06:02:29Z","reason": "ParsedGroupVersions","message": "All legacy kube types successfully parsed"},{"type": "NamespaceDeletionContentFailure","status": "False","lastTransitionTime": "2023-06-28T06:02:29Z","reason": "ContentDeleted","message": "All content successfully deleted, may be waiting on finalization"},{"type": "NamespaceContentRemaining","status": "False","lastTransitionTime": "2023-06-28T06:02:29Z","reason": "ContentRemoved","message": "All content successfully removed"},{"type": "NamespaceFinalizersRemaining","status": "False","lastTransitionTime": "2023-06-28T06:02:29Z","reason": "ContentHasNoFinalizers","message": "All content-preserving finalizers finished"}]}
}
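
Incidentally, the NamespaceDeletionDiscoveryFailure condition in the response above points at the likely root cause: API discovery fails for metrics.k8s.io/v1beta1, and a broken aggregated API is a common reason namespace deletion never finishes. A quick way to check for such APIs (standard kubectl; deleting a stale APIService is only appropriate if its backing service, here presumably metrics-server, really is gone):

# list aggregated APIs and look for entries whose AVAILABLE column is False
kubectl get apiservice | grep -v True
# if the metrics API is permanently broken, removing its APIService lets namespaces finalize on their own
kubectl delete apiservice v1beta1.metrics.k8s.io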

Check whether the namespace is gone:

[root@node1 ~]# kubectl get ns    # the namespace named test has been deleted
NAME                   STATUS        AGE
configmap              Terminating   135d
default                Active        207d
harbor                 Active        207d
kube-flannel           Terminating   21m
kube-node-lease        Active        207d
kube-public            Active        207d
kube-system            Active        207d
kubekey-system         Active        207d
kubernetes-dashboard   Active        207d
local-path-storage     Active        187d
nginx                  Active        146d

The other namespaces stuck in Terminating can be removed with the same procedure.
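
As an alternative that skips kubectl proxy entirely, kubectl can send the same PUT itself using the credentials from your kubeconfig; a minimal sketch, assuming the same exported and edited test.json:

# PUT the edited namespace object to the finalize subresource through kubectl
kubectl replace --raw "/api/v1/namespaces/test/finalize" -f ./test.json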
Note: avoid giving a namespace the same name as a resource type, otherwise tools may interpret the namespace name as a resource type and error out. The namespace named configmap above hit exactly this problem; it could not be cleaned up this way and ultimately had to be deleted directly from etcd.
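
For reference, removing such a namespace object directly from etcd looks roughly like this; a sketch only, the endpoint and certificate paths are assumptions that depend on how the cluster was installed, and editing etcd by hand should be the very last resort:

# delete the namespace key from etcd (Kubernetes stores namespaces under /registry/namespaces/<name>)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  del /registry/namespaces/configmap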
