Kubernetes Cluster Orchestration: Service Microservices

Updated: 2024-10-14 12:25:10


Service microservices

Create a test example

vim myapp.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP

kubectl apply -f myapp.yml
kubectl get svc

By default, kube-proxy schedules traffic with iptables.

IPVS mode

Edit the kube-proxy configuration:

kubectl -n kube-system edit cm kube-proxy
...
mode: "ipvs"

Restart the kube-proxy pods:

kubectl -n kube-system get pod | grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1)}'

After switching to IPVS mode, kube-proxy creates a virtual interface named kube-ipvs0 on the host and assigns the Service IPs to it:

ip a
ipvsadm -ln

ClusterIP

A ClusterIP Service is only reachable from inside the cluster.

vim myapp.yml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP

Once the Service is created, the cluster DNS provides name resolution:

kubectl get svc
dig -t A myapp.default.svc.cluster.local. @10.96.0.10

Headless

vim myapp.yml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP
  clusterIP: None

kubectl delete svc myapp
kubectl apply -f myapp.yml

A headless Service does not allocate a virtual IP (VIP):

kubectl get svc

A headless Service is accessed by its name; the in-cluster DNS provides the resolution:

dig -t A myapp.default.svc.cluster.local. @10.96.0.10

Inside the cluster, the Service can be accessed directly by name:

kubectl run demo --image busyboxplus -it --rm
/ # nslookup myapp
/ # cat /etc/resolv.conf
/ # curl myapp
/ # curl myapp/hostname.html

NodePort

vim myapp.yml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: NodePort

kubectl apply -f myapp.yml
kubectl get svc

NodePort binds a port on every cluster node; each port corresponds to one Service:

curl 192.168.92.12:32195

LoadBalancer

vim myapp.yml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: LoadBalancer

kubectl apply -f myapp.yml

By default no external IP can be allocated:

kubectl get svc

The LoadBalancer type is intended for cloud platforms; bare-metal environments need MetalLB installed to provide it.

MetalLB

Official site: /

kubectl edit configmap -n kube-system kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

kubectl -n kube-system get pod | grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1)}'

Download the deployment file:

wget .13.12/config/manifests/metallb-native.yaml

Edit the image paths in the file to match the Harbor registry layout:

vim metallb-native.yaml

...
image: metallb/speaker:v0.13.12
image: metallb/controller:v0.13.12

Upload the images to Harbor:

docker pull quay.io/metallb/controller:v0.13.12
docker pull quay.io/metallb/speaker:v0.13.12
docker tag quay.io/metallb/controller:v0.13.12 reg.westos/metallb/controller:v0.13.12
docker tag quay.io/metallb/speaker:v0.13.12 reg.westos/metallb/speaker:v0.13.12
docker push reg.westos/metallb/controller:v0.13.12
docker push reg.westos/metallb/speaker:v0.13.12

Deploy the service:

kubectl apply -f metallb-native.yaml
kubectl -n metallb-system get pod

Configure the address pool to allocate from:

vim config.yaml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.92.100-192.168.92.200
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

kubectl apply -f config.yaml
kubectl get svc

Access the service from outside the cluster through the allocated address:

curl 192.168.92.100
curl 192.168.92.100/hostname.html
curl 192.168.92.100/hostname.html
curl 192.168.92.100/hostname.html

NodePort default port range

vim myapp.yml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33333
  selector:
    app: myapp
  type: NodePort

kubectl apply -f myapp.yml

The default NodePort range is 30000-32767; a port outside it is rejected.
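As a quick sanity check, the range rule can be expressed as a small shell helper (a sketch; the function name is ours, and the default bounds below are the Kubernetes defaults):

```shell
# Return success (0) if a port falls inside the NodePort range.
# Defaults match Kubernetes' built-in range of 30000-32767.
in_nodeport_range() {
    port=$1
    min=${2:-30000}
    max=${3:-32767}
    [ "$port" -ge "$min" ] && [ "$port" -le "$max" ]
}

in_nodeport_range 33333 && echo allowed || echo rejected              # rejected by default
in_nodeport_range 33333 30000 50000 && echo allowed || echo rejected  # allowed after widening the range
```

This mirrors why nodePort: 33333 fails until the apiserver's range is widened as shown below.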

vim /etc/kubernetes/manifests/kube-apiserver.yaml

Add the following parameter; the port range can be customized:

- --service-node-port-range=30000-50000

After this change, kube-apiserver restarts automatically; wait until it is healthy again before operating on the cluster.
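Since kubectl refuses connections while the apiserver restarts, a small retry loop avoids hammering it by hand (a sketch; the wait_for helper, the /healthz endpoint choice, and the 120-second timeout are our assumptions):

```shell
# Retry a command once per second until it succeeds or the timeout (seconds) expires.
wait_for() {
    timeout=$1; shift
    waited=0
    until "$@" >/dev/null 2>&1; do
        waited=$((waited + 1))
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 1
    done
    return 0
}

# e.g. wait up to 120s for the apiserver health endpoint to respond:
# wait_for 120 kubectl get --raw /healthz
```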

ExternalName

vim externalname.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: www.westos

kubectl apply -f externalname.yaml
dig -t A my-service.default.svc.cluster.local. @10.96.0.10

ingress-nginx

Deployment

Official site:

Download the deployment file:

 wget .8.2/deploy/static/provider/baremetal/deploy.yaml

Upload the images to Harbor:

docker pull dyrnq/ingress-nginx-controller:v1.8.2
docker pull dyrnq/kube-webhook-certgen:v20230407
docker tag dyrnq/ingress-nginx-controller:v1.8.2 reg.westos/ingress-nginx/controller:v1.8.2
docker tag dyrnq/kube-webhook-certgen:v20230407 reg.westos/ingress-nginx/kube-webhook-certgen:v20230407
docker push reg.westos/ingress-nginx/controller:v1.8.2
docker push reg.westos/ingress-nginx/kube-webhook-certgen:v20230407

Edit the three image paths:

vim deploy.yaml
...
image: ingress-nginx/controller:v1.8.2
...
image: ingress-nginx/kube-webhook-certgen:v20230407
...
image: ingress-nginx/kube-webhook-certgen:v20230407

kubectl apply -f deploy.yaml
kubectl -n ingress-nginx get pod

kubectl -n ingress-nginx get svc

Switch the controller Service to LoadBalancer:

kubectl -n ingress-nginx edit  svc ingress-nginx-controller
type: LoadBalancer

kubectl -n ingress-nginx get svc

Create an Ingress rule:

vim ingress.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80

kubectl apply -f ingress.yml
kubectl get pod
kubectl get svc
kubectl get ingress

The Ingress must live in the same namespace as the Service it exposes.
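For example, if the Service lived somewhere other than default, the Ingress would have to declare the same namespace (a sketch; the prod namespace here is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: prod        # must match the backend Service's namespace
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp  # resolved within the Ingress's own namespace (prod)
            port:
              number: 80
```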

Test:

curl 192.168.92.100

Clean up the resources:

kubectl delete -f myapp.yml
kubectl delete -f ingress.yml

Path-based access

Docs: /

Create the Services:

vim myapp-v1.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-v1
  template:
    metadata:
      labels:
        app: myapp-v1
    spec:
      containers:
      - image: myapp:v1
        name: myapp-v1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v1
  type: ClusterIP

kubectl apply -f myapp-v1.yml

vim myapp-v2.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-v2
  template:
    metadata:
      labels:
        app: myapp-v2
    spec:
      containers:
      - image: myapp:v2
        name: myapp-v2
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v2
  type: ClusterIP

kubectl apply -f myapp-v2.yml
kubectl get svc

Create the Ingress:

vim ingress1.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: myapp-v2
            port:
              number: 80

kubectl apply -f ingress1.yml
kubectl describe ingress minimal-ingress

Test:

vim /etc/hosts
...
192.168.92.100 myapp.westos myapp1.westos myapp2.westos

curl myapp.westos/v1
curl myapp.westos/v2

Clean up:

kubectl delete -f ingress1.yml

Host-based access

vim ingress2.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.westos
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
  - host: myapp2.westos
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v2
            port:
              number: 80

kubectl apply -f ingress2.yml
kubectl describe ingress minimal-ingress

Test:

curl myapp1.westos
curl myapp2.westos

Clean up:

kubectl delete -f ingress2.yml

TLS encryption

Create a certificate:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
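Before creating the secret, it is worth confirming what was actually generated; the subject and validity window can be read back from the certificate (a sketch; assumes openssl is available, as it must be for the generation step above):

```shell
# Generate the same self-signed key/cert pair, then inspect the certificate's
# subject and validity dates before loading it into the cluster.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
subject=$(openssl x509 -in tls.crt -noout -subject)
echo "$subject"
openssl x509 -in tls.crt -noout -dates
```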

vim ingress3.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
spec:
  tls:
  - hosts:
    - myapp.westos
    secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80

kubectl apply -f ingress3.yml
kubectl describe ingress ingress-tls

Test:

curl -k 

Basic authentication

Create the auth file:

yum install -y httpd-tools
htpasswd -c auth hjl
cat auth
kubectl create secret generic basic-auth --from-file=auth

vim ingress3.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - hjl'
spec:
  tls:
  - hosts:
    - myapp.westos
    secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80

kubectl apply -f ingress3.yml
kubectl describe ingress ingress-tls

Test:

curl -k  -u hjl:westos

Rewrite redirects

Example 1:

vim ingress3.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - hjl'
    nginx.ingress.kubernetes.io/app-root: /hostname.html
spec:
  tls:
  - hosts:
    - myapp.westos
    secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80

kubectl apply -f ingress3.yml
kubectl describe ingress ingress-tls

Test:

curl -k  -u hjl:westos -I

Example 2:

vim ingress3.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - hjl'
    #nginx.ingress.kubernetes.io/app-root: /hostname.html
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - myapp.westos
    secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
      - path: /westos(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: myapp-v1
            port:
              number: 80

kubectl apply -f ingress3.yml
kubectl describe ingress ingress-tls

Test:

curl -k  -u hjl:westos
curl -k .html -u hjl:westos

Clean up:

kubectl delete -f ingress3.yml

Canary releases

Header-based canary

vim ingress4.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myapp-v1
            port:
              number: 80

kubectl apply -f ingress4.yml
kubectl get ingress

vim ingress5.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: stage
    nginx.ingress.kubernetes.io/canary-by-header-value: gray
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myapp-v2
            port:
              number: 80

kubectl apply -f ingress5.yml
kubectl describe ingress myapp-v2-ingress

Test:

curl myapp.westos
curl -H "stage: gray" myapp.westos

Weight-based canary

vim ingress5.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    #nginx.ingress.kubernetes.io/canary-by-header: stage
    #nginx.ingress.kubernetes.io/canary-by-header-value: gray
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-weight-total: "100"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myapp-v2
            port:
              number: 80

kubectl apply -f ingress5.yml
kubectl describe ingress myapp-v2-ingress

Test:

vim ingress.sh

#!/bin/bash
v1=0
v2=0
for (( i=0; i<100; i++ ))
do
    response=`curl -s myapp.westos | grep -c v1`
    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`
done
echo "v1:$v1, v2:$v2"

sh ingress.sh

Clean up:

kubectl delete -f ingress5.yml

Splitting by business domain

vim ingress6.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: rewrite-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos
    http:
      paths:
      - path: /user/(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
      - path: /order/(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp-v2
            port:
              number: 80

kubectl apply -f ingress6.yml
kubectl describe ingress rewrite-ingress

Test:

curl myapp.westos
curl myapp.westos/user/hostname.html
curl myapp.westos/order/hostname.html

Clean up:

kubectl delete -f ingress6.yml

The flannel network plugin

Use host-gw mode:

kubectl -n kube-flannel edit  cm kube-flannel-cfg

Restart the pods for the change to take effect:

kubectl -n kube-flannel delete  pod --all

The calico network plugin

Deployment

Remove the flannel plugin:

kubectl delete  -f kube-flannel.yml

Delete the flannel configuration file on every node to avoid conflicts:

[root@k8s2 ~]# rm -f /etc/cni/net.d/10-flannel.conflist
[root@k8s3 ~]# rm -f /etc/cni/net.d/10-flannel.conflist
[root@k8s4 ~]# rm -f /etc/cni/net.d/10-flannel.conflist

Download the deployment file:

wget  .25.0/manifests/calico.yaml

Edit the image paths:

vim calico.yaml

Download the images:

docker push reg.westos/calico/kube-controllers:v3.25.0
docker push reg.westos/calico/cni:v3.25.0
docker push reg.westos/calico/node:v3.25.0

Upload the images to Harbor.

Deploy calico:

kubectl apply -f calico.yaml
kubectl -n kube-system get pod -o wide

Restart all cluster nodes so that pods are reassigned IPs.

After the cluster comes back up, test the network:

curl  myapp.westos

Network policies

Restrict pod traffic

vim networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp-v1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: test
    ports:
    - protocol: TCP
      port: 80

kubectl apply -f networkpolicy.yaml
kubectl describe networkpolicies

The policy applies to pods carrying the label app=myapp-v1:

kubectl get pod --show-labels

At this point the Service is not reachable:

kubectl get svc

kubectl run demo --image busyboxplus -it --rm
/ # curl 10.111.43.137

After giving the test pod the required label, access succeeds:

kubectl get pod --show-labels

kubectl label pod demo role=test
kubectl get pod --show-labels

/ # curl 10.111.43.137

Restrict namespace traffic

vim networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: test
    - podSelector:
        matchLabels:
          role: test
    ports:
    - protocol: TCP
      port: 80

kubectl apply -f networkpolicy.yaml
kubectl describe networkpolicies

kubectl create namespace test

Add the required label to the namespace:

kubectl label ns test project=test
kubectl get ns test --show-labels

kubectl -n test run demo --image busyboxplus -it --rm
/ # curl 10.111.43.137

Restrict namespace and pod simultaneously (combining namespaceSelector and podSelector in a single from entry requires both to match, unlike separate entries, which match either)

vim networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: test
      podSelector:
        matchLabels:
          role: test
    ports:
    - protocol: TCP
      port: 80

kubectl apply -f networkpolicy.yaml
kubectl describe networkpolicies

Pods in the test namespace can access the Service only after being given the required label:

[root@k8s2 calico]# kubectl -n test label pod demo role=test
[root@k8s2 calico]# kubectl -n test get pod --show-labels

Restrict traffic from outside the cluster

vim networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.56.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80

kubectl apply -f networkpolicy.yaml
kubectl describe networkpolicies

kubectl get svc

curl 192.168.92.101

Published 2023-11-30 12:40:24 at https://www.elefans.com/category/jswz/34/1650237.html