calico/node is not ready: BIRD is not ready: BGP not established

Problem description

I'm running Kubernetes 1.13.2, set up using kubeadm, and I'm struggling to get Calico 3.5 up and running. The cluster runs on top of KVM.

Setup:

  • kubeadm init --apiserver-advertise-address=10.255.253.20 --pod-network-cidr=192.168.0.0/16
  • modified the calico.yaml file to include:

        - name: IP_AUTODETECTION_METHOD
          value: "interface=ens.*"

  • applied rbac.yaml, etcd.yaml and calico.yaml
  • Output of kubectl describe pods:

Events:
  Type     Reason     Age                     From                    Message
  ----     ------     ----                    ----                    -------
  Normal   Scheduled  23m                     default-scheduler       Successfully assigned kube-system/calico-node-hjwrc to k8s-master-01
  Normal   Pulling    23m                     kubelet, k8s-master-01  pulling image "quay.io/calico/cni:v3.5.0"
  Normal   Pulled     23m                     kubelet, k8s-master-01  Successfully pulled image "quay.io/calico/cni:v3.5.0"
  Normal   Created    23m                     kubelet, k8s-master-01  Created container
  Normal   Started    23m                     kubelet, k8s-master-01  Started container
  Normal   Pulling    23m                     kubelet, k8s-master-01  pulling image "quay.io/calico/node:v3.5.0"
  Normal   Pulled     23m                     kubelet, k8s-master-01  Successfully pulled image "quay.io/calico/node:v3.5.0"
  Warning  Unhealthy  23m                     kubelet, k8s-master-01  Readiness probe failed: calico/node is not ready: felix is not ready: Get localhost:9099/readiness: dial tcp [::1]:9099: connect: connection refused
  Warning  Unhealthy  23m                     kubelet, k8s-master-01  Liveness probe failed: Get localhost:9099/liveness: dial tcp [::1]:9099: connect: connection refused
  Normal   Created    23m (x2 over 23m)       kubelet, k8s-master-01  Created container
  Normal   Started    23m (x2 over 23m)       kubelet, k8s-master-01  Started container
  Normal   Pulled     23m                     kubelet, k8s-master-01  Container image "quay.io/calico/node:v3.5.0" already present on machine
  Warning  Unhealthy  3m32s (x23 over 7m12s)  kubelet, k8s-master-01  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 10.255.253.22

Output of calicoctl node status:

Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+---------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |  INFO   |
+---------------+-------------------+-------+----------+---------+
| 10.255.253.22 | node-to-node mesh | start | 16:24:44 | Passive |
+---------------+-------------------+-------+----------+---------+

IPv6 BGP status
No IPv6 peers found.

Output of ETCD_ENDPOINTS=localhost:6666 calicoctl get nodes -o yaml:

apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: Node
  metadata:
    annotations:
      projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"k8s-master-01","node-role.kubernetes.io/master":""}'
    creationTimestamp: 2019-01-31T16:08:56Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/hostname: k8s-master-01
      node-role.kubernetes.io/master: ""
    name: k8s-master-01
    resourceVersion: "28"
    uid: 82fee4dc-2572-11e9-8ab7-5254002c725d
  spec:
    bgp:
      ipv4Address: 10.255.253.20/24
      ipv4IPIPTunnelAddr: 192.168.151.128
    orchRefs:
    - nodeName: k8s-master-01
      orchestrator: k8s
- apiVersion: projectcalico.org/v3
  kind: Node
  metadata:
    annotations:
      projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"k8s-worker-01"}'
    creationTimestamp: 2019-01-31T16:24:44Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/hostname: k8s-worker-01
    name: k8s-worker-01
    resourceVersion: "170"
    uid: b7c2c5a6-2574-11e9-aaa4-5254007d5f6a
  spec:
    bgp:
      ipv4Address: 10.255.253.22/24
      ipv4IPIPTunnelAddr: 192.168.36.192
    orchRefs:
    - nodeName: k8s-worker-01
      orchestrator: k8s
kind: NodeList
metadata:
  resourceVersion: "395"
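As a quick sanity check on the output above, the per-node IPIP tunnel addresses should fall inside the pod CIDR passed to kubeadm (192.168.0.0/16). A minimal sketch using the values shown, just to confirm that IPAM is consistent and the failure is BGP-side rather than an allocation problem:

```python
import ipaddress

# Values taken from the calicoctl output above; the pool is the
# --pod-network-cidr passed to kubeadm init.
pool = ipaddress.ip_network("192.168.0.0/16")
tunnel_addrs = {
    "k8s-master-01": ipaddress.ip_address("192.168.151.128"),
    "k8s-worker-01": ipaddress.ip_address("192.168.36.192"),
}
for node, addr in tunnel_addrs.items():
    # Both addresses sit inside the pool, so address allocation
    # looks healthy; the problem is the BGP session itself.
    print(node, addr in pool)
```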

Output of ETCD_ENDPOINTS=localhost:6666 calicoctl get bgppeers:

    NAME PEERIP NODE ASN

Output from kubectl logs:

2019-01-31 17:01:20.519 [INFO][48] int_dataplane.go 751: Applying dataplane updates
2019-01-31 17:01:20.519 [INFO][48] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2019-01-31 17:01:20.519 [INFO][48] ipsets.go 254: Resyncing ipsets with dataplane. family="inet"
2019-01-31 17:01:20.523 [INFO][48] ipsets.go 304: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=3.675284ms
2019-01-31 17:01:20.523 [INFO][48] int_dataplane.go 765: Finished applying updates to dataplane. msecToApply=4.124166000000001
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 36329)
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 52383)
2019-01-31 17:01:23.182 [INFO][48] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 39661)
2019-01-31 17:01:25.433 [INFO][48] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 57359)
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 47151)
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 39243)
2019-01-31 17:01:30.943 [INFO][48] int_dataplane.go 751: Applying dataplane updates
2019-01-31 17:01:30.943 [INFO][48] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2019-01-31 17:01:30.943 [INFO][48] ipsets.go 254: Resyncing ipsets with dataplane. family="inet"
2019-01-31 17:01:30.945 [INFO][48] ipsets.go 304: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=2.369997ms
2019-01-31 17:01:30.946 [INFO][48] int_dataplane.go 765: Finished applying updates to dataplane. msecToApply=2.8165820000000004
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 60641)
2019-01-31 17:01:33.190 [INFO][48] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}

Note: the unknown address above (10.255.253.14) is the IP of br0 on the KVM host; I'm not sure why it shows up here.

Accepted answer

I found the solution: Calico's autodetection picked the first interface reported by ifconfig (in my case) and was trying to reach the worker nodes through it, which is not the right IP.

Solution:

Change the calico.yaml file to override the autodetected IP with the IP of the correct interface (eth1 in my case), using the following steps.

The following port needs to be open for Calico networking (BGP): TCP 179.
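A plain TCP probe is enough to confirm that port 179 is reachable between the nodes before digging into BIRD itself. A minimal sketch (the commented-out address is the worker node from the question; substitute your own peer):

```python
import socket

def bgp_port_open(host, port=179, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, unreachable hosts, and timeouts.
        return False

# Example (worker node IP from the question):
# print(bgp_port_open("10.255.253.22"))
```

Note this only checks reachability; a firewall that drops rather than rejects will show up as a timeout, so a False after the full timeout usually points at filtering rather than a down host.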

# Specify interface
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth1"
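For reference, Calico matches the value after interface= against interface names as a regular expression, which is why both the question's "interface=ens.*" and this answer's "interface=eth1" are valid forms. A small illustration of that matching behavior (the interface names below are made-up examples, not from the cluster):

```python
import re

# Hypothetical interface list, as `ip -o link` might report it.
interfaces = ["lo", "eth0", "eth1", "ens3", "docker0"]

def autodetect(pattern, names):
    """Return the interfaces whose names match the given regex."""
    return [n for n in names if re.match(pattern, n)]

print(autodetect(r"ens.*", interfaces))  # ['ens3']
print(autodetect(r"eth1", interfaces))   # ['eth1']
```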

    calico.yaml

---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"
  # Configure the MTU to use
  veth_mtu: "1440"
  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
            "type": "calico-ipam"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamblocks.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMBlock
    plural: ipamblocks
    singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: blockaffinities.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BlockAffinity
    plural: blockaffinities
    singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamhandles.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMHandle
    plural: ipamhandles
    singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMConfig
    plural: ipamconfigs
    singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.8.2
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.8.2
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.8.2
          volumeMounts:
            - name: flexvol-driver-host
              mountPath: /host/driver
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.2
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Specify interface
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth1"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -bird-ready
                - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.8.2
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
                - /usr/bin/check-status
                - -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
