Understanding --master-ipv4-cidr when provisioning a private GKE cluster

This article looks at what the --master-ipv4-cidr flag controls when provisioning a private GKE cluster; the question and answer below may be a useful reference if you are working through the same problem.

Problem description

I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine.

Google provides this example here of provisioning a private cluster where the control plane services (e.g. Kubernetes API) live on the 172.16.0.16/28 subnet.

https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters

gcloud beta container clusters create pr-clust-1 \
  --private-cluster \
  --master-ipv4-cidr 172.16.0.16/28 \
  --enable-ip-alias \
  --create-subnetwork ""

When I run this command, I see that:

  • I now have a few gke subnets in my VPC that belong to the cluster's subnets for nodes and services. They are in the 10.x.x.x/8 range.
  • I do not have any subnets in the 172.16/16 address space.
  • I do have some new peering rules and routes that seem relevant. For example, there is a new route "peering-route-a08d11779e9a3276" with a destination range of "172.16.0.16/28" and next hop "gke-62d565a060f347e0fba7-3094-3230-peer". That peering in turn points to "gke-62d565a060f347e0fba7-3094-bb01-net".

bash$ gcloud compute networks subnets list | grep us-west1
default                     us-west1  default  10.138.0.0/20
gke-insti3-subnet-62d565a0  us-west1  default  10.2.56.0/22

(venv) Christophers-MacBook-Pro:Stratus-Cloud christophermutzel$ gcloud compute networks peerings list
NAME                                     NETWORK  PEER_PROJECT              PEER_NETWORK                            AUTO_CREATE_ROUTES  STATE   STATE_DETAILS
gke-62d565a060f347e0fba7-3094-3230-peer  default  gke-prod-us-west1-a-4180  gke-62d565a060f347e0fba7-3094-bb01-net  True                ACTIVE  [2018-08-23T16:42:31.351-07:00]: Connected.

Is gke-62d565a060f347e0fba7-3094-bb01-net a peered VPC in which the Kubernetes management endpoints live (the control plane stuff in the 172.16/16 range) that Google is managing for the GKE service?

Further - how are my requests making it to the Kubernetes API server?

Recommended answer

The Private Cluster feature of GKE depends on the Alias IP Ranges feature of VPC networking, so there are multiple things happening when you create a private cluster:

The --enable-ip-alias flag tells GKE to use a subnetwork that has two secondary IP ranges: one for pods and one for services. This allows the VPC network to understand all the IP addresses in your cluster and route traffic appropriately.
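To make the secondary ranges concrete, here is a minimal sketch of creating such a subnetwork by hand; the subnet name, region, and CIDRs are illustrative placeholders, not values GKE generates:

# Hypothetical subnet with named secondary ranges for pods and services.
# All names and CIDRs here are placeholders chosen for illustration.
gcloud compute networks subnets create my-gke-subnet \
  --network default \
  --region us-west1 \
  --range 10.10.0.0/20 \
  --secondary-range pods=10.20.0.0/14,services=10.30.0.0/20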

The --create-subnetwork flag tells GKE to create a new subnetwork (gke-insti3-subnet-62d565a0 in your case) and choose its primary and secondary ranges automatically. Note that you could instead choose the secondary ranges yourself with --cluster-ipv4-cidr and --services-ipv4-cidr. Or you could even create the subnetwork yourself and tell GKE to use it with the flags --subnetwork, --cluster-secondary-range-name, and --services-secondary-range-name.
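As a rough sketch of that last option, mirroring the flags from the question (the cluster name, CIDR, subnet, and range names are hypothetical), reusing a pre-created subnetwork would look something like:

# Hypothetical: point GKE at an existing subnet and its named secondary ranges.
gcloud beta container clusters create pr-clust-2 \
  --private-cluster \
  --master-ipv4-cidr 172.16.0.32/28 \
  --enable-ip-alias \
  --subnetwork my-gke-subnet \
  --cluster-secondary-range-name pods \
  --services-secondary-range-name services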

The --private-cluster flag tells GKE to create a new VPC network (gke-62d565a060f347e0fba7-3094-bb01-net in your case) in a Google-owned project and connect it to your VPC network using VPC Network Peering. The Kubernetes management endpoints live in the range you specify with --master-ipv4-cidr (172.16.0.16/28 in your case). An Internal Load Balancer is also created in the Google-owned project and this is what your worker nodes communicate with. This ILB allows traffic to be load-balanced across multiple VMs in the case of a Regional Cluster. You can find this internal IP address as the privateEndpoint field in the output of gcloud beta container clusters describe. The important thing to understand is that all communication between master VMs and worker node VMs happens over internal IP addresses, thanks to the VPC peering between the two networks.
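For example, something along these lines should surface both endpoints; the zone is a placeholder, and the exact field layout can vary with the gcloud/API version (newer versions nest the private endpoint under privateClusterConfig):

# Show the public endpoint and private-cluster details for the cluster.
# Zone is a placeholder; adjust to where the cluster actually lives.
gcloud beta container clusters describe pr-clust-1 \
  --zone us-west1-a \
  --format "yaml(endpoint, privateClusterConfig)"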

Your private cluster also has an external IP address, which you can find as the endpoint field in the output of gcloud beta container clusters describe. This is not used by the worker nodes, but is typically used by customers to manage their cluster remotely, e.g., using kubectl.
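A typical flow, assuming the cluster name from the example above, a placeholder zone, and that your workstation is allowed to reach the endpoint:

# Fetch credentials for the cluster, then talk to the API server with kubectl.
gcloud container clusters get-credentials pr-clust-1 --zone us-west1-a
kubectl get nodes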

You can use the Master Authorized Networks feature to restrict which IP ranges (both internal and external) have access to the management endpoints. This feature is strongly recommended for private clusters, and is enabled by default when you create the cluster using the gcloud CLI.
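For instance, limiting external access to a single range might look like the following; the cluster name, zone, and CIDR are placeholders:

# Restrict which external ranges may reach the cluster's management endpoint.
gcloud container clusters update pr-clust-1 \
  --zone us-west1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/29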

Hope this helps!
