AWS load balancer is not registered with any instances

Problem description

I use kubeadm to launch a cluster on AWS. I can successfully create a load balancer on AWS using kubectl, but the load balancer is not registered with any EC2 instances. As a result, the service cannot be accessed from the public internet.

From what I can observe, when the ELB is created it cannot find any healthy instances under any of the subnets. I am fairly sure I have tagged all my instances correctly.
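For reference, the ELB's own view of instance health can be queried with the AWS CLI. A minimal check, assuming a Classic ELB (matching the "elb" annotation in the Service below) and with my-elb-name standing in for the load balancer name Kubernetes generated:

aws elb describe-instance-health --load-balancer-name my-elb-name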

Updated: reading the logs from kube-controller-manager, I see that my node does not have a ProviderID set. And according to a GitHub comment, the ELB will ignore nodes whose instance ID cannot be determined from the provider. Could this be causing the issue? How should I set the providerID?
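To confirm which nodes are missing the field, the providerID can be read directly from each node's spec. A quick check (the column names here are arbitrary):

kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID

An empty PROVIDER_ID column for a node means the cloud provider integration never set it.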

Service configuration

apiVersion: v1
kind: Service
metadata:
  name: load-balancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: replica
  type: LoadBalancer
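Once the Service is applied, the provisioned ELB's DNS name should appear as the service's external address. A usage sketch, assuming the manifest above is saved as service.yaml:

kubectl apply -f service.yaml
kubectl get service load-balancer -o wide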

Deployment configuration

apiVersion: apps/v1
kind: Deployment
metadata:
  name: replica-deployment
  labels:
    app: replica
spec:
  replicas: 1
  selector:
    matchLabels:
      app: replica
  template:
    metadata:
      labels:
        app: replica
    spec:
      containers:
      - name: web
        image: web
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        - containerPort: 443
        command: ["/bin/bash"]
        args: ["-c", "script_to_start_server.sh"]

Node output, status section

status:
  addresses:
  - address: 172.31.35.209
    type: InternalIP
  - address: k8s
    type: Hostname
  allocatable:
    cpu: "4"
    ephemeral-storage: "119850776788"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16328856Ki
    pods: "110"
  capacity:
    cpu: "4"
    ephemeral-storage: 130046416Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16431256Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has sufficient disk space available
    reason: KubeletHasSufficientDisk
    status: "False"
    type: OutOfDisk
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet is posting ready status. AppArmor enabled
    reason: KubeletReady
    status: "True"
    type: Ready

How can I fix this issue?

Thanks!

Recommended answer

In my case, the issue was that the worker nodes were not getting the providerID assigned properly.

I managed to patch the node to add the ProviderID:

kubectl patch node ip-xxxxx.ap-southeast-2.compute.internal -p '{"spec":{"providerID":"aws:///ap-southeast-2a/i-0xxxxx"}}'

After that, when I deployed the service, the ELB got created, the node group was added, and it worked end to end. This is not a straightforward answer, but until I find a better solution, let it remain here.
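For reference, the two values that make up the providerID (aws:///&lt;availability-zone&gt;/&lt;instance-id&gt;) can be read from the EC2 instance metadata service on the node itself. A minimal sketch, assuming IMDSv1 is enabled on the instance:

# Run on the worker node; 169.254.169.254 is the EC2 instance metadata endpoint.
curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
curl -s http://169.254.169.254/latest/meta-data/instance-id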
