a37.ansible 生产实战案例

编程入门 行业动态 更新时间:2024-10-11 19:18:57

a37.ansible 生产<a href=https://www.elefans.com/category/jswz/34/1769775.html style=实战案例"/>

a37.ansible 生产实战案例

源码下载地址:

1.高可用Kubernetes集群规划

角色机器名机器配置ip地址安装软件
ansibleansible-server.example.local2C2G172.31.3.100ansible
master1k8s-master01.example.local2C4G172.31.3.101chrony-client、docker、kube-controller-manager、kube-scheduler、kube-apiserver、kubelet、kube-proxy、kubectl
master2k8s-master02.example.local2C4G172.31.3.102chrony-client、docker、kube-controller-manager、kube-scheduler、kube-apiserver、kubelet、kube-proxy、kubectl
master3k8s-master03.example.local2C4G172.31.3.103chrony-client、docker、kube-controller-manager、kube-scheduler、kube-apiserver、kubelet、kube-proxy、kubectl
ha1k8s-ha01.example.local2C2G172.31.3.104 172.31.3.188(vip)chrony-server、haproxy、keepalived
ha2k8s-ha02.example.local2C2G172.31.3.105chrony-server、haproxy、keepalived
harbor1k8s-harbor01.example.local2C2G172.31.3.106chrony-client、docker、docker-compose、harbor
harbor2k8s-harbor02.example.local2C2G172.31.3.107chrony-client、docker、docker-compose、harbor
etcd1k8s-etcd01.example.local2C2G172.31.3.108chrony-client、docker、etcd
etcd2k8s-etcd02.example.local2C2G172.31.3.109chrony-client、docker、etcd
etcd3k8s-etcd03.example.local2C2G172.31.3.110chrony-client、docker、etcd
node1k8s-node01.example.local2C4G172.31.3.111chrony-client、docker、kubelet、kube-proxy
node2k8s-node02.example.local2C4G172.31.3.112chrony-client、docker、kubelet、kube-proxy
node3k8s-node03.example.local2C4G172.31.3.113chrony-client、docker、kubelet、kube-proxy

软件版本信息和Pod、Service网段规划:

配置信息备注
支持的操作系统版本CentOS 7.9/stream 8、Rocky 8、Ubuntu 18.04/20.04
Docker版本20.10.14
Containerd版本1.15.11
kubernetes版本1.23.6
Pod网段192.168.0.0/12
Service网段10.96.0.0/12

2.安装ansible和配置

2.1 安装ansible

#CentOS
[root@ansible-server ~]# yum -y install ansible[root@ansible-server ~]# ansible --version
ansible 2.9.25config file = /data/ansible/ansible.cfgconfigured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']ansible python module location = /usr/lib/python2.7/site-packages/ansibleexecutable location = /usr/bin/ansiblepython version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]#ubuntu18.04安装最新版的ansible
root@ubuntu1804:~# apt updateroot@ubuntu1804:~# apt -y install software-properties-commonroot@ubuntu1804:~# apt-add-repository --yes --update ppa:ansible/ansibleroot@ubuntu1804:~# apt -y install ansible
root@ubuntu1804:~# ansible --version
ansible 2.9.27config file = /etc/ansible/ansible.cfgconfigured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']ansible python module location = /usr/lib/python2.7/dist-packages/ansibleexecutable location = /usr/bin/ansiblepython version = 2.7.17 (default, Feb 27 2021, 15:10:58) [GCC 7.5.0]#ubuntu 20.04安装
[root@ubuntu ~]# apt -y install ansible

2.2 配置ansible

[root@ansible-server ~]# mkdir /data/ansible
[root@ansible-server ~]# cd /data/ansible[root@ansible-server ansible]# vim ansible.cfg
[defaults]
inventory      = ./inventory
forks          = 10
roles_path    = ./roles
remote_user = root#下面的IP根据自己的k8s集群主机规划设置
[root@ansible-server ansible]# vim inventory 
[master]
172.31.3.101 hname=k8s-master01
172.31.3.102 hname=k8s-master02
172.31.3.103 hname=k8s-master03[ha]
172.31.3.104 hname=k8s-ha01
172.31.3.105 hname=k8s-ha02[harbor]
172.31.3.106 hname=k8s-harbor01
172.31.3.107 hname=k8s-harbor02[etcd]
172.31.3.108 hname=k8s-etcd01
172.31.3.109 hname=k8s-etcd02
172.31.3.110 hname=k8s-etcd03[node]
172.31.3.111 hname=k8s-node01
172.31.3.112 hname=k8s-node02
172.31.3.113 hname=k8s-node03[all:vars]
domain=example.local[k8s_cluster:children]
master
node[chrony_server:children]
ha[chrony_client:children]
master
node
harbor
etcd[keepalives_master]
172.31.3.104[keepalives_backup]
172.31.3.105[haproxy:children]
ha[master01]
172.31.3.101

3.设置客户端网卡名和ip

#rocky8和centos系统设置
[root@172 ~]# bash reset.sh ************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 11
Rocky 8.5 网卡名已修改成功,请重新启动系统后才能生效!************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 12
请输入IP地址:172.31.0.101
IP 172.31.0.101  available!
请输入子网掩码位数:21
请输入网关地址:172.31.0.2
IP 172.31.0.2  available!
Rocky 8.5 IP地址和网关地址已修改成功,请重新启动系统后生效!************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 21#ubuntu系统设置
[C:\~]$ ssh raymond@172.31.7.3Connecting to 172.31.7.3:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-156-generic x86_64)* Documentation:  * Management:     * Support:         information as of Mon Dec 27 13:56:42 CST 2021System load:  0.17              Processes:            193Usage of /:   2.1% of 91.17GB   Users logged in:      1Memory usage: 10%               IP address for ens33: 172.31.7.3Swap usage:   0%* Super-optimized for small spaces - read how we shrank the memoryfootprint of MicroK8s to make it the smallest full K8s around. updates can be applied immediately.
18 of these updates are standard security updates.
To see these additional updates run: apt list --upgradableNew release '20.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.Last login: Mon Dec 27 13:56:31 2021
/usr/bin/xauth:  file /home/raymond/.Xauthority does not exist
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.raymond@ubuntu1804:~$ bash reset.sh ************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 18
请输入密码: 123456
[sudo] password for raymond: Enter new UNIX password: Retype new UNIX password: passwd: password updated successfully
Ubuntu 18.04 root用户登录已设置完成,请重新登录后生效!************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 21
raymond@ubuntu1804:~$ exit
logoutConnection closed.Disconnected from remote host(172.31.7.3:22) at 13:57:16.Type `help' to learn how to use Xshell prompt.[C:\~]$ ssh root@172.31.7.3Connecting to 172.31.7.3:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-156-generic x86_64)* Documentation:  * Management:     * Support:         information as of Mon Dec 27 13:57:47 CST 2021System load:  0.06              Processes:            199Usage of /:   2.1% of 91.17GB   Users logged in:      1Memory usage: 11%               IP address for ens33: 172.31.7.3Swap usage:   0%* Super-optimized for small spaces - read how we shrank the memoryfootprint of MicroK8s to make it the smallest full K8s around. updates can be applied immediately.
18 of these updates are standard security updates.
To see these additional updates run: apt list --upgradableNew release '20.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law./usr/bin/xauth:  file /root/.Xauthority does not exist
root@ubuntu1804:~# mv /home/raymond/reset.sh .
root@ubuntu1804:~# bash reset.sh ************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 11
Ubuntu 18.04 网卡名已修改成功,请重新启动系统后才能生效!************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 12
请输入IP地址:172.31.0.103
IP 172.31.0.103  available!
请输入子网掩码位数:21
请输入网关地址:172.31.0.2
IP 172.31.0.2  available!
Ubuntu 18.04 IP地址和网关地址已修改成功,请重新启动系统后生效!************************************************************
*                      初始化脚本菜单                      *
* 1.禁用SELinux               12.修改IP地址和网关地址      *
* 2.关闭防火墙                13.设置主机名                *
* 3.优化SSH                   14.设置PS1和系统环境变量     *
* 4.设置系统别名              15.禁用SWAP                  *
* 5.1-4全设置                 16.优化内核参数              *
* 6.设置vimrc配置文件         17.优化资源限制参数          *
* 7.设置软件包仓库            18.Ubuntu设置root用户登录    *
* 8.Minimal安装建议安装软件   19.Ubuntu卸载无用软件包      *
* 9.安装邮件服务并配置邮件    20.重启系统                  *
* 10.更改SSH端口号            21.退出                      *
* 11.修改网卡名                                            *
************************************************************请选择相应的编号(1-21): 21

4.实现基于key验证的脚本

#下面的IP根据自己的k8s集群主机规划设置
[root@ansible-server ansible]# cat ssh_key.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2021-12-20
#FileName:      ssh_key.sh
#URL:           raymond.blog.csdn
#Description:   ssh_key for CentOS 7/8 & Ubuntu 18.04/24.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
IP=`ip addr show ${NET_NAME}| awk -F" +|/" '/global/{print $3}'`
export SSHPASS=123456
HOSTS="
172.31.3.101
172.31.3.102
172.31.3.103
172.31.3.104
172.31.3.105
172.31.3.106
172.31.3.107
172.31.3.108
172.31.3.109
172.31.3.110
172.31.3.111
172.31.3.112
172.31.3.113"os(){OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}ssh_key_push(){rm -f ~/.ssh/id_rsa*ssh-keygen -f /root/.ssh/id_rsa -P '' &> /dev/nullif [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;thenrpm -q sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};yum -y install sshpass &> /dev/null; }elsedpkg -S sshpass &> /dev/null || { ${COLOR}"安装sshpass软件包"${END};apt -y install sshpass &> /dev/null; }fisshpass -e ssh-copy-id -o StrictHostKeyChecking=no ${IP} &> /dev/null[ $? -eq 0 ] && echo ${IP} is finished || echo ${IP} is falsefor i in ${HOSTS};dosshpass -e scp -o StrictHostKeyChecking=no -r /root/.ssh root@${i}: &> /dev/null[ $? -eq 0 ] && echo ${i} is finished || echo ${i} is falsedonefor i in ${HOSTS};doscp /root/.ssh/known_hosts ${i}:.ssh/ &> /dev/null[ $? -eq 0 ] && echo ${i} is finished || echo ${i} is falsedone
}main(){osssh_key_push
}main[root@ansible-server ansible]# bash ssh_key.sh 
172.31.3.100 is finished
172.31.3.101 is finished
172.31.3.102 is finished
172.31.3.103 is finished
172.31.3.104 is finished
172.31.3.105 is finished
172.31.3.106 is finished
172.31.3.107 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.110 is finished
172.31.3.111 is finished
172.31.3.112 is finished
172.31.3.113 is finished
172.31.3.101 is finished
172.31.3.102 is finished
172.31.3.103 is finished
172.31.3.104 is finished
172.31.3.105 is finished
172.31.3.106 is finished
172.31.3.107 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.110 is finished
172.31.3.111 is finished
172.31.3.112 is finished
172.31.3.113 is finished

5.系统初始化和安装软件包

5.1 系统初始化

[root@ansible-server ansible]# mkdir -p roles/reset/{tasks,templates,vars}[root@ansible-server ansible]# cd roles/reset/
[root@ansible-server reset]# ls
tasks  templates  vars[root@ansible-server reset]# vim templates/yum8.repo.j2 
[BaseOS]
name=BaseOS
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/BaseOS/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/BaseOS/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}[AppStream]
name=AppStream
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/AppStream/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/AppStream/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}[extras]
name=extras
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/extras/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/extras/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}{% if ansible_distribution =="Rocky" %}
[plus]
{% elif ansible_distribution=="CentOS" %}
[centosplus]
{% endif %}
{% if ansible_distribution =="Rocky" %}
name=plus
{% elif ansible_distribution=="CentOS" %}
name=centosplus
{% endif %}
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/plus/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/centosplus/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}[PowerTools]
name=PowerTools
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/PowerTools/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/PowerTools/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}[epel]
name=epel
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/fedora/epel/$releasever/Everything/$basearch/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/epel/$releasever/Everything/$basearch/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=https://{{ ROCKY_URL }}/fedora/epel/RPM-GPG-KEY-EPEL-$releasever
{% elif ansible_distribution=="CentOS" %}
gpgkey=https://{{ URL }}/epel/RPM-GPG-KEY-EPEL-$releasever
{% endif %}[root@ansible-server reset]# vim templates/yum7.repo.j2 
[base]
name=base
baseurl=https://{{ URL }}/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever[extras]
name=extras
baseurl=https://{{ URL }}/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever[updates]
name=updates
baseurl=https://{{ URL }}/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever[centosplus]
name=centosplus
baseurl=https://{{ URL }}/centos/$releasever/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever[epel]
name=epel
baseurl=https://{{ URL }}/epel/$releasever/$basearch/
gpgcheck=1
gpgkey=https://{{ URL }}/epel/RPM-GPG-KEY-EPEL-$releasever[root@ansible-server reset]#  vim templates/apt.list.j2 
deb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiversedeb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-security main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-security main restricted universe multiversedeb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiversedeb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-proposed main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-proposed main restricted universe multiversedeb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-backports main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-backports main restricted universe multiverse#下面VIP设置成自己的keepalived里的VIP(虚拟IP)地址,HARBOR_DOMAIN的地址设置成自己的harbor域名地址
[root@ansible-server reset]# vim vars/main.yml
VIP: 172.31.3.188
HARBOR_DOMAIN: harbor.raymonds
ROCKY_URL: mirrors.ustc.edu
URL: mirrors.cloud.tencent[root@ansible-server reset]# vim tasks/set_hostname.yml
- name: set hostnamehostname:name: "{{ hname }}.{{ domain }}"[root@ansible-server reset]# vim tasks/set_hosts.yml
- name: set hosts filelineinfile:path: "/etc/hosts"line: "{{ item }} {{hostvars[item].ansible_hostname}}.{{ domain }} {{hostvars[item].ansible_hostname}}"loop:"{{ play_hosts }}"
- name: set hosts file2lineinfile:path: "/etc/hosts"line: "{{ item }}"loop:- "{{ VIP }} k8s-lb"- "{{ VIP }} {{ HARBOR_DOMAIN }}"[root@ansible-server reset]# vim tasks/disable_selinux.yml
- name: disable selinuxreplace:path: /etc/sysconfig/selinuxregexp: '^(SELINUX=).*'replace: '\1disabled'when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")[root@ansible-server reset]# vim tasks/disable_firewall.yml
- name: disable firewallsystemd:name: firewalldstate: stoppedenabled: nowhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: disable ufwsystemd:name: ufwstate: stoppedenabled: nowhen:- ansible_distribution=="Ubuntu"[root@ansible-server reset]# vim tasks/disable_networkmanager.yml
- name: disable NetworkManagersystemd:name: NetworkManagerstate: stoppedenabled: nowhen:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"[root@ansible-server reset]# vim tasks/disable_swap.yml
- name: disable swapreplace:path: /etc/fstabregexp: '^(.*swap.*)'replace: '#\1'
- name: get sd numbershell:cmd: lsblk|awk -F"[ └─]" '/SWAP/{printf $3}'register: SD_NAMEwhen:- ansible_distribution=="Ubuntu"- ansible_distribution_major_version=="20"
- name: disable swap for ubuntu20shell:cmd: systemctl mask dev-{{ SD_NAME.stdout}}.swapwhen:- ansible_distribution=="Ubuntu"- ansible_distribution_major_version=="20"[root@ansible-server reset]# vim tasks/set_limits.yml
- name: set limitshell:cmd: ulimit -SHn 65535
- name: set limits.conf filelineinfile:path: "/etc/security/limits.conf"line: "{{ item }}"loop:- "* soft nofile 655360"- "* hard nofile 131072"- "* soft nproc 655350"- "* hard nproc 655350"- "* soft memlock unlimited"- "* hard memlock unlimited" [root@ansible-server reset]# vim tasks/optimization_sshd.yml
- name: optimization sshd disable UseDNSreplace:path: /etc/ssh/sshd_configregexp: '^#(UseDNS).*'replace: '\1 no'
- name: optimization sshd diaable CentOS or Rocky GSSAPIAuthenticationreplace:path: /etc/ssh/sshd_configregexp: '^(GSSAPIAuthentication).*'replace: '\1 no'when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: optimization sshd diaable Ubuntu GSSAPIAuthenticationreplace:path: /etc/ssh/sshd_configregexp: '^#(GSSAPIAuthentication).*'replace: '\1 no'notify:- restart sshdwhen:- ansible_distribution=="Ubuntu"[root@ansible-server reset]# vim tasks/set_alias.yml
- name: set CentOS or Rocky aliaslineinfile:path: ~/.bashrcline: "{{ item }}"loop:- "alias cdnet=\"cd /etc/sysconfig/network-scripts\""- "alias vie0=\"vim /etc/sysconfig/network-scripts/ifcfg-eth0\""- "alias vie1=\"vim /etc/sysconfig/network-scripts/ifcfg-eth1\""- "alias scandisk=\"echo '- - -' > /sys/class/scsi_host/host0/scan;echo '- - -' > /sys/class/scsi_host/host1/scan;echo '- - -' > /sys/class/scsi_host/host2/scan\""when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: set Ubuntu aliaslineinfile:path: ~/.bashrcline: "{{ item }}"loop:- "alias cdnet=\"cd /etc/netplan\""- "alias scandisk=\"echo '- - -' > /sys/class/scsi_host/host0/scan;echo '- - -' > /sys/class/scsi_host/host1/scan;echo '- - -' > /sys/class/scsi_host/host2/scan\""when:- ansible_distribution=="Ubuntu"[root@ansible-server reset]# vim tasks/set_mirror.yml
- name: find CentOS or Rocky repo filesfind:paths: /etc/yum.repos.d/patterns: "*.repo"register: FILENAMEwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete CentOS or Rocky repo filesfile:path: "{{ item.path }}"state: absentwith_items: "{{ FILENAME.files }}"when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: set CentOS8 or Rocky8 Mirror warehousetemplate:src: yum8.repo.j2dest: /etc/yum.repos.d/base.repowhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- ansible_distribution_major_version=="8"
- name: set CentOS7 Mirror warehousetemplate:src: yum7.repo.j2dest: /etc/yum.repos.d/base.repowhen:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"
- name: set Ubuntu Mirror warehousetemplate:src: apt.list.j2dest: /etc/apt/sources.listwhen:- ansible_distribution=="Ubuntu"
- name: delete lock filesfile:path: "{{ item }}"state: absentloop:- /var/lib/dpkg/lock- /var/lib/apt/lists/lock- /var/cache/apt/archives/lockwhen:- ansible_distribution=="Ubuntu"
- name: apt updateapt:update_cache: yes force: yes when:- ansible_distribution=="Ubuntu"[root@ansible-server reset]# vim tasks/main.yml
- include: set_hostname.yml
- include: set_hosts.yml
- include: disable_selinux.yml
- include: disable_firewall.yml
- include: disable_networkmanager.yml
- include: disable_swap.yml
- include: set_limits.yml
- include: optimization_sshd.yml
- include: set_alias.yml
- include: set_mirror.yml[root@ansible-server reset]# cd ../../
[root@ansible-server ansible]# tree roles/reset/
[root@ansible-server ansible]# tree roles/reset/
roles/reset/
├── tasks
│   ├── disable_firewall.yml
│   ├── disable_networkmanager.yml
│   ├── disable_selinux.yml
│   ├── disable_swap.yml
│   ├── main.yml
│   ├── optimization_sshd.yml
│   ├── set_alias.yml
│   ├── set_hostname.yml
│   ├── set_hosts.yml
│   ├── set_limits.yml
│   └── set_mirror.yml
├── templates
│   ├── apt.list.j2
│   ├── yum7.repo.j2
│   └── yum8.repo.j2
└── vars└── main.yml3 directories, 15 files[root@ansible-server ansible]# vim reset_role.yml
---
- hosts: allroles:- role: reset[root@ansible-server ansible]# ansible-playbook reset_role.yml 

5.2 安装软件包

[root@ansible-server ansible]# mkdir -p roles/reset-installpackage/{files,tasks}[root@ansible-server ansible]# cd roles/reset-installpackage/
[root@ansible-server reset-installpackage]# ls
files  tasks[root@ansible-server reset-installpackage]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -P files/[root@ansible-server reset-installpackage]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -P files/[root@ansible-server reset-installpackage]# vim files/ge4.18_ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip[root@ansible-server reset-installpackage]# vim files/lt4.18_ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip[root@ansible-server reset-installpackage]# vim files/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
netfilter.nf_conntrack_max=2310720net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384[root@ansible-server reset-installpackage]# vim tasks/install_package.yml
- name: install Centos or Rocky packageyum:name: vim,tree,lrzsz,wget,jq,psmisc,net-tools,telnet,gitwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: install Centos8 or Rocky8 packageyum:name: rsyncwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- ansible_distribution_major_version=="8"
- name: install Ubuntu packageapt:name: tree,lrzsz,jqforce: yes when:- ansible_distribution=="Ubuntu"[root@ansible-server reset-installpackage]# vim tasks/set_centos7_kernel.yml
- name: update CentOS7yum:name: '*'state: latestexclude: kernel*when:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"
- name: copy CentOS7 kernel filescopy: src: "{{ item }}"dest: /tmploop:- kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm- kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpmwhen:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"
- name: Finding RPM files find: paths: "/tmp" patterns: "*.rpm" register: RPM_RESULTwhen:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"
- name: Install RPM yum: name: "{{ item.path }}" with_items: "{{ RPM_RESULT.files }}" when:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"
- name: delete kernel filesfile:path: "{{ item.path }}"state: absent with_items: "{{ RPM_RESULT.files }}" when:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"
- name: set grubshell:cmd: grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg; grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"when:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"[root@ansible-server reset-installpackage]# vim tasks/install_ipvsadm.yml
- name: install CentOS or Rocky ipvsadmyum:name: ipvsadm,ipset,sysstat,conntrack,libseccompwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- inventory_hostname in groups.k8s_cluster
- name: install Ubuntu ipvsadmapt:name: ipvsadm,ipset,sysstat,conntrack,libseccomp-devforce: yes when:- ansible_distribution=="Ubuntu"- inventory_hostname in groups.k8s_cluster[root@ansible-server reset-installpackage]# vim tasks/set_ipvs.yml
- name: configuration load_modshell:cmd: |modprobe -- ip_vsmodprobe -- ip_vs_rrmodprobe -- ip_vs_wrrmodprobe -- ip_vs_shwhen:- inventory_hostname in groups.k8s_cluster
- name: configuration load_mod kernel ge4.18shell:cmd: modprobe -- nf_conntrackwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") or (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="20")- inventory_hostname in groups.k8s_cluster
- name: configuration load_mod kernel lt4.18shell:cmd: modprobe -- nf_conntrack_ipv4when:- (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="18")- inventory_hostname in groups.k8s_cluster
- name: Copy ge4.18_ipvs.conf filecopy: src: ge4.18_ipvs.confdest: /etc/modules-load.d/ipvs.confwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") or (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="20")- inventory_hostname in groups.k8s_cluster
- name: Copy lt4.18_ipvs.conf filecopy: src: lt4.18_ipvs.confdest: /etc/modules-load.d/ipvs.confwhen:- (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="18")- inventory_hostname in groups.k8s_cluster
- name: start systemd-modules-load service systemd:name: systemd-modules-loadstate: startedenabled: yeswhen:- inventory_hostname in groups.k8s_cluster[root@ansible-server reset-installpackage]# vim tasks/set_k8s_kernel.yml
- name: copy k8s.conf filecopy: src: k8s.confdest: /etc/sysctl.d/
- name: Load kernel configshell:cmd: "sysctl --system"[root@ansible-server reset-installpackage]# vim tasks/reboot_system.yml
- name: reboot systemreboot:[root@ansible-server reset-installpackage]# vim tasks/main.yml
- include: install_package.yml
- include: set_centos7_kernel.yml
- include: install_ipvsadm.yml
- include: set_ipvs.yml
- include: set_k8s_kernel.yml
- include: reboot_system.yml[root@ansible-server reset-installpackage]# cd ../../
[root@ansible-server ansible]# tree roles/reset-installpackage/
roles/reset-installpackage/
├── files
│   ├── ge4.18_ipvs.conf
│   ├── k8s.conf
│   ├── kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
│   ├── kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
│   └── lt4.18_ipvs.conf
└── tasks├── install_ipvsadm.yml├── install_package.yml├── main.yml├── reboot_system.yml├── set_centos7_kernel.yml├── set_ipvs.yml└── set_k8s_kernel.yml2 directories, 12 files[root@ansible-server ansible]# vim reset_installpackage_role.yml 
---
- hosts: allserial: 3roles:- role: reset-installpackage[root@ansible-server ansible]# ansible-playbook reset_installpackage_role.yml 

6.chrony

6.1 chrony-server

[root@ansible-server ansible]# mkdir -p roles/chrony-server/{tasks,handlers}[root@ansible-server ansible]# cd roles/chrony-server/
[root@ansible-server chrony-server]# ls
handlers  tasks[root@ansible-server chrony-server]# vim tasks/install_chrony_yum.yml
- name: install CentOS or Rocky chronyyum:name: chronywhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^pool.*' string linelineinfile:path: /etc/chrony.confregexp: '^pool.*'state: absentwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^server.*' string linelineinfile:path: /etc/chrony.confregexp: '^server.*'state: absentwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd
- name: add Time server for CentOS or Rocky /etc/chrony.conf filelineinfile:path: /etc/chrony.confinsertafter: '^# Please consider .*'line: "server ntp.aliyun iburst\nserver time1.cloud.tencent iburst\nserver ntp.tuna.tsinghua.edu iburst"when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd
- name: Substitution '^#(allow).*' string for CentOS or Rocky /etc/chrony.conf filereplace:path: /etc/chrony.confregexp: '^#(allow).*'replace: '\1 0.0.0.0/0'when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd
- name: Substitution '^#(local).*' string for CentOS or Rocky /etc/chrony.conf filereplace:path: /etc/chrony.confregexp: '^#(local).*'replace: '\1 stratum 10'when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd[root@ansible-server chrony-server]# vim tasks/install_chrony_apt.yml
- name: delete lock filesfile:path: "{{ item }}"state: absentloop:- /var/lib/dpkg/lock- /var/lib/apt/lists/lock- /var/cache/apt/archives/lockwhen:- ansible_distribution=="Ubuntu"
- name: apt updateapt:update_cache: yesforce: yes when:- ansible_distribution=="Ubuntu"
- name: install Ubuntu chronyapt:name: chronyforce: yeswhen:- ansible_distribution=="Ubuntu"
- name: delete Ubuntu /etc/chrony/chrony.conf file contains '^pool.*' string linelineinfile:path: /etc/chrony/chrony.confregexp: '^pool.*'state: absentwhen:- ansible_distribution=="Ubuntu"notify:- restart chronyd
- name: add Time server for Ubuntu /etc/chrony/chrony.conf filelineinfile:path: /etc/chrony/chrony.confinsertafter: '^# See http:.*'line: "server ntp.aliyun iburst\nserver time1.cloud.tencent iburst\nserver ntp.tuna.tsinghua.edu iburst"when:- ansible_distribution=="Ubuntu"
- name: add 'allow 0.0.0.0/0' string and 'local stratum 10' string for Ubuntu /etc/chrony/chrony.conf filelineinfile:path: /etc/chrony/chrony.confline: "{{ item }}"loop:- "allow 0.0.0.0/0"- "local stratum 10"when:- ansible_distribution=="Ubuntu"notify:- restart chronyd[root@ansible-server chrony-server]# vim tasks/service.yml
- name: start chronydsystemd:name: chronydstate: startedenabled: yes[root@ansible-server chrony-server]# vim tasks/main.yml
- include: install_chrony_yum.yml
- include: install_chrony_apt.yml
- include: service.yml[root@ansible-server chrony-server]# vim handlers/main.yml
- name: restart chronydsystemd:name: chronydstate: restarted[root@ansible-server chrony-server]# cd ../../
[root@ansible-server ansible]# tree roles/chrony-server/
roles/chrony-server/
├── handlers
│   └── main.yml
└── tasks├── install_chrony_apt.yml├── install_chrony_yum.yml├── main.yml└── service.yml2 directories, 5 files[root@ansible-server ansible]# vim chrony_server_role.yml 
---
- hosts: chrony_serverroles:- role: chrony-server[root@ansible-server ansible]# ansible-playbook chrony_server_role.yml[root@k8s-ha01 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^- 203.107.6.88                  2   6    37    62    -15ms[  -15ms] +/-   35ms
^* 139.199.215.251               2   6    37    62    -10us[+1488us] +/-   37ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-    0ns[root@k8s-ha02 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    77     3  -4058us[+2582us] +/-   31ms
^+ 139.199.215.251               2   6    77     2  +6881us[+6881us] +/-   33ms
^? 101.6.6.172                   0   7     0     -     +0ns[   +0ns] +/-    0ns

6.2 chrony-client

[root@ansible-server ansible]# mkdir -p roles/chrony-client/{tasks,handlers,vars}
[root@ansible-server ansible]# cd roles/chrony-client/
[root@ansible-server chrony-client]# ls
handlers  tasks  vars#下面IP设置成chrony-server的IP地址,SERVER1设置ha1的IP地址,SERVER2设置ha2的IP地址
[root@ansible-server chrony-client]# vim vars/main.yml
SERVER1: 172.31.3.104
SERVER2: 172.31.3.105[root@ansible-server chrony-client]# vim tasks/install_chrony_yum.yml
- name: install CentOS or Rocky chronyyum:name: chronywhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^pool.*' string linelineinfile:path: /etc/chrony.confregexp: '^pool.*'state: absentwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^server.*' string linelineinfile:path: /etc/chrony.confregexp: '^server.*'state: absentwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd
- name: add Time server for CentOS or Rocky /etc/chrony.conf filelineinfile:path: /etc/chrony.confinsertafter: '^# Please consider .*'line: "server {{ SERVER1 }} iburst\nserver {{ SERVER2 }} iburst"when:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")notify:- restart chronyd[root@ansible-server chrony-client]# vim tasks/install_chrony_apt.yml
- name: delete lock filesfile:path: "{{ item }}"state: absentloop:- /var/lib/dpkg/lock- /var/lib/apt/lists/lock- /var/cache/apt/archives/lockwhen:- ansible_distribution=="Ubuntu"
- name: apt updateapt:update_cache: yesforce: yes when:- ansible_distribution=="Ubuntu"
- name: install Ubuntu chronyapt:name: chronyforce: yeswhen:- ansible_distribution=="Ubuntu"
- name: delete Ubuntu /etc/chrony/chrony.conf file contains '^pool.*' string linelineinfile:path: /etc/chrony/chrony.confregexp: '^pool.*'state: absentwhen:- ansible_distribution=="Ubuntu"notify:- restart chronyd
- name: add Time server for Ubuntu /etc/chrony/chrony.conf filelineinfile:path: /etc/chrony/chrony.confinsertafter: '^# See http:.*'line: "server {{ SERVER1 }} iburst\nserver {{ SERVER2 }} iburst"when:- ansible_distribution=="Ubuntu"notify:- restart chronyd[root@ansible-server chrony-client]# vim tasks/service.yml
- name: start chronydsystemd:name: chronydstate: startedenabled: yes[root@ansible-server chrony-client]# vim tasks/main.yml
- include: install_chrony_yum.yml
- include: install_chrony_apt.yml
- include: service.yml[root@ansible-server chrony-client]# vim handlers/main.yml
- name: restart chronydsystemd:name: chronydstate: restarted[root@ansible-server chrony-client]# cd ../../
[root@ansible-server ansible]# tree roles/chrony-client/
roles/chrony-client/
├── handlers
│   └── main.yml
├── tasks
│   ├── install_chrony_apt.yml
│   ├── install_chrony_yum.yml
│   ├── main.yml
│   └── service.yml
└── vars└── main.yml3 directories, 6 files[root@ansible-server ansible]# vim chrony_client_role.yml
---
- hosts: chrony_clientroles:- role: chrony-client[root@ansible-server ansible]# ansible-playbook chrony_client_role.yml[root@k8s-master01 ~]# chronyc sources -nv
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* k8s-ha01                      3   6    17    28    -57us[  -29us] +/-   31ms
^+ k8s-ha02                      3   6    17    29   +204us[ +231us] +/-   34ms

7.haproxy

[root@ansible-server ansible]# mkdir -p roles/haproxy/{tasks,vars,files,templates}
[root@ansible-server ansible]# cd roles/haproxy/
[root@ansible-server haproxy]# ls
files  tasks  templates  vars[root@ansible-server haproxy]# wget .4.3.tar.gz -P files/
[root@ansible-server haproxy]# wget .4/src/haproxy-2.4.10.tar.gz -P files/[root@ansible-server haproxy]# vim files/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID[Install]
WantedBy=multi-user.target#下面VIP设置成自己的keepalived里的VIP(虚拟IP)地址
[root@ansible-server haproxy]# vim vars/main.yml
SRC_DIR: /usr/local/src
LUA_FILE: lua-5.4.3.tar.gz
HAPROXY_FILE: haproxy-2.4.10.tar.gz
HAPROXY_INSTALL_DIR: /apps/haproxy
STATS_AUTH_USER: admin
STATS_AUTH_PASSWORD: 123456
VIP: 172.31.3.188[root@ansible-server haproxy]# vim templates/haproxy.cfg.j2
global
maxconn 100000
chroot {{ HAPROXY_INSTALL_DIR }}
stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
pidfile /var/lib/haproxy/haproxy.pid
log 127.0.0.1 local3 infodefaults
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000mslisten statsmode httpbind 0.0.0.0:9999stats enablelog globalstats uri /haproxy-statusstats auth {{ STATS_AUTH_USER }}:{{ STATS_AUTH_PASSWORD }}listen kubernetes-6443bind {{ VIP }}:6443mode tcplog global{% for i in groups.master %}server {{ i }} {{ i }}:6443 check inter 3s fall 2 rise 5{% endfor %}listen harbor-80bind {{ VIP }}:80mode httplog globalbalance source{% for i in groups.harbor %}server {{ i }} {{ i }}:80 check inter 3s fall 2 rise 5{% endfor %}[root@ansible-server haproxy]# vim tasks/install_package.yml
- name: install CentOS or Rocky depend on the packageyum:name: gcc,make,gcc-c++,glibc,glibc-devel,pcre,pcre-devel,openssl,openssl-devel,systemd-devel,libtermcap-devel,ncurses-devel,libevent-devel,readline-develwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- inventory_hostname in groups.haproxy
- name: delete lock filesfile:path: "{{ item }}"state: absentloop:- /var/lib/dpkg/lock- /var/lib/apt/lists/lock- /var/cache/apt/archives/lockwhen:- ansible_distribution=="Ubuntu"- inventory_hostname in groups.haproxy
- name: apt updateapt:update_cache: yes force: yes when:- ansible_distribution=="Ubuntu"- inventory_hostname in groups.haproxy
- name: install Ubuntu depend on the packageapt:name: gcc,make,openssl,libssl-dev,libpcre3,libpcre3-dev,zlib1g-dev,libreadline-dev,libsystemd-devforce: yes when:- ansible_distribution=="Ubuntu"- inventory_hostname in groups.haproxy[root@ansible-server haproxy]# vim tasks/build_lua.yml
- name: unarchive lua packageunarchive:src: "{{ LUA_FILE }}"dest: "{{ SRC_DIR }}"when:- inventory_hostname in groups.haproxy
- name: get LUA_DIR directoryshell:cmd: echo {{ LUA_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'register: LUA_DIRwhen:- inventory_hostname in groups.haproxy
- name: Build and install luashell: chdir: "{{ SRC_DIR }}/{{ LUA_DIR.stdout }}"cmd: make all testwhen:- inventory_hostname in groups.haproxy[root@ansible-server haproxy]# vim tasks/build_haproxy.yml
- name: unarchive haproxy packageunarchive:src: "{{ HAPROXY_FILE }}"dest: "{{ SRC_DIR }}"when:- inventory_hostname in groups.haproxy
- name: get HAPROXY_DIR directoryshell:cmd: echo {{ HAPROXY_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'register: HAPROXY_DIRwhen:- inventory_hostname in groups.haproxy
- name: make Haproxyshell: chdir: "{{ SRC_DIR }}/{{ HAPROXY_DIR.stdout }}"cmd: make -j {{ ansible_processor_vcpus }} ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC={{ SRC_DIR }}/{{ LUA_DIR.stdout }}/src/ LUA_LIB={{ SRC_DIR }}/{{ LUA_DIR.stdout }}/src/ PREFIX={{ HAPROXY_INSTALL_DIR }}when:- inventory_hostname in groups.haproxy
- name: make install Haproxyshell: chdir: "{{ SRC_DIR }}/{{ HAPROXY_DIR.stdout }}"cmd: make install PREFIX={{ HAPROXY_INSTALL_DIR }}when:- inventory_hostname in groups.haproxy[root@ansible-server haproxy]# vim tasks/config.yml
- name: copy haproxy.service filecopy:src: haproxy.servicedest: /lib/systemd/systemwhen:- inventory_hostname in groups.haproxy
- name: create haproxy linkfile:src: "../..{{ HAPROXY_INSTALL_DIR }}/sbin/{{ item.src }}"dest: "/usr/sbin/{{ item.src }}"state: linkowner: rootgroup: rootmode: 755force: yes   with_items:- src: haproxywhen:- inventory_hostname in groups.haproxy
- name: create /etc/haproxy directoryfile:path: /etc/haproxystate: directorywhen:- inventory_hostname in groups.haproxy
- name: create /var/lib/haproxy/ directoryfile:path: /var/lib/haproxy/state: directorywhen:- inventory_hostname in groups.haproxy
- name: copy haproxy.cfg filetemplate:src: haproxy.cfg.j2dest: /etc/haproxy/haproxy.cfgwhen:- inventory_hostname in groups.haproxy
- name: Add the kernelsysctl:name: net.ipv4.ip_nonlocal_bindvalue: "1"when:- inventory_hostname in groups.haproxy
- name: PATH variablecopy:content: 'PATH={{ HAPROXY_INSTALL_DIR }}/sbin:$PATH'dest: /etc/profile.d/haproxy.shwhen:- inventory_hostname in groups.haproxy
- name: PATH variable entryshell:cmd: . /etc/profile.d/haproxy.shwhen:- inventory_hostname in groups.haproxy[root@ansible-server haproxy]# vim tasks/service.yml
- name: start haproxysystemd:name: haproxystate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.haproxy[root@ansible-server haproxy]# vim tasks/main.yml
- include: install_package.yml
- include: build_lua.yml
- include: build_haproxy.yml
- include: config.yml
- include: service.yml[root@ansible-server haproxy]# cd ../../
[root@ansible-server ansible]# tree roles/haproxy/
roles/haproxy/
├── files
│   ├── haproxy-2.4.10.tar.gz
│   ├── haproxy.service
│   └── lua-5.4.3.tar.gz
├── tasks
│   ├── build_haproxy.yml
│   ├── build_lua.yml
│   ├── config.yml
│   ├── install_package.yml
│   ├── main.yml
│   └── service.yml
├── templates
│   └── haproxy.cfg.j2
└── vars└── main.yml4 directories, 11 files[root@ansible-server ansible]# vim haproxy_role.yml
---
- hosts: haproxy:master:harborroles:- role: haproxy[root@ansible-server ansible]# ansible-playbook haproxy_role.yml

8.keepalived

8.1 keepalived-master

[root@ansible-server ansible]# mkdir -p roles/keepalived-master/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/keepalived-master/
[root@ansible-server keepalived-master]# ls
files  tasks  templates  vars[root@ansible-server keepalived-master]#  wget .2.4.tar.gz -P files/[root@ansible-server keepalived-master]# vim files/check_haproxy.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-09
#FileName:      check_haproxy.sh
#URL:           raymond.blog.csdn
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
err=0
for k in $(seq 1 3);docheck_code=$(pgrep haproxy)if [[ $check_code == "" ]]; thenerr=$(expr $err + 1)sleep 1continueelseerr=0breakfi
doneif [[ $err != "0" ]]; thenecho "systemctl stop keepalived"/usr/bin/systemctl stop keepalivedexit 1
elseexit 0
fi#下面VIP设置成自己的keepalived里的VIP(虚拟IP)地址
[root@ansible-server keepalived-master]# vim vars/main.yml
URL: mirrors.cloud.tencent
ROCKY_URL: mirrors.sjtug.sjtu.edu
KEEPALIVED_FILE: keepalived-2.2.4.tar.gz
SRC_DIR: /usr/local/src
KEEPALIVED_INSTALL_DIR: /apps/keepalived
STATE: MASTER
PRIORITY: 100
VIP: 172.31.3.188[root@ansible-server keepalived-master]# vim templates/PowerTools.repo.j2 
[PowerTools]
name=PowerTools
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/PowerTools/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever/PowerTools/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}[root@ansible-server keepalived-master]# vim templates/keepalived.conf.j2
! Configuration File for keepalivedglobal_defs {router_id LVS_DEVELscript_user rootenable_script_security
}vrrp_script check_haoroxy {script "/etc/keepalived/check_haproxy.sh"interval 5weight -5fall 2  rise 1
}vrrp_instance VI_1 {state {{ STATE }}interface {{ ansible_default_ipv4.interface }}virtual_router_id 51priority {{ PRIORITY }}advert_int 1authentication {auth_type PASSauth_pass 1111}virtual_ipaddress {{{ VIP }} dev {{ ansible_default_ipv4.interface }} label {{ ansible_default_ipv4.interface }}:1}track_script {check_haproxy}
}[root@ansible-server keepalived-master]# vim tasks/install_package.yml
- name: find "[PowerTools]" mirror warehousefind:path: /etc/yum.repos.d/contains: '\[PowerTools\]'register: RETURNwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- ansible_distribution_major_version=="8"
- name: copy repo filetemplate:src: PowerTools.repo.j2dest: /etc/yum.repos.d/PowerTools.repowhen: - (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") and (ansible_distribution_major_version=="8") - RETURN.matched == 0
- name: install CentOS8 or Rocky8 depend on the packageyum:name: make,gcc,ipvsadm,autoconf,automake,openssl-devel,libnl3-devel,iptables-devel,ipset-devel,file-devel,net-snmp-devel,glib2-devel,pcre2-devel,libnftnl-devel,libmnl-devel,systemd-develwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- ansible_distribution_major_version=="8"
- name: install CentOS7 depend on the packageyum:name: make,gcc,libnfnetlink-devel,libnfnetlink,ipvsadm,libnl,libnl-devel,libnl3,libnl3-devel,lm_sensors-libs,net-snmp-agent-libs,net-snmp-libs,openssh-server,openssh-clients,openssl,openssl-devel,automake,iproutewhen:- ansible_distribution=="CentOS"- ansible_distribution_major_version=="7"
- name: delete lock filesfile:path: "{{ item }}"state: absentloop:- /var/lib/dpkg/lock- /var/lib/apt/lists/lock- /var/cache/apt/archives/lockwhen:- ansible_distribution=="Ubuntu"
- name: apt updateapt:update_cache: yes force: yes when:- ansible_distribution=="Ubuntu"
- name: install Ubuntu 20.04 depend on the packageapt:name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-devforce: yes when:- ansible_distribution=="Ubuntu"- ansible_distribution_major_version=="20"
- name: install Ubuntu 18.04 depend on the packageapt:name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,iptables-dev,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-devforce: yes when:- ansible_distribution=="Ubuntu"- ansible_distribution_major_version=="18"[root@ansible-server keepalived-master]# vim tasks/keepalived_file.yml
- name: unarchive  keepalived packageunarchive:src: "{{ KEEPALIVED_FILE }}"dest: "{{ SRC_DIR }}"[root@ansible-server keepalived_master]# vim tasks/build.yml
- name: get KEEPALIVED_DIR directoryshell:cmd: echo {{ KEEPALIVED_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'register: KEEPALIVED_DIR
- name: Build and install Keepalivedshell: chdir: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}"cmd: ./configure --prefix={{ KEEPALIVED_INSTALL_DIR }} --disable-fwmark
- name: make && make installshell:chdir: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}"cmd: make -j {{ ansible_processor_vcpus }} && make install[root@ansible-server keepalived-master]# vim tasks/config.yml
- name: create /etc/keepalived directoryfile:path: /etc/keepalivedstate: directory
- name: copy keepalived.conf filetemplate:src: keepalived.conf.j2dest: /etc/keepalived/keepalived.conf
- name: copy check_haproxy.sh filecopy:src: check_haproxy.shdest: /etc/keepalived/mode: 0755
- name: copy keepalived.service filecopy:remote_src: Truesrc: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}/keepalived/keepalived.service"dest: /lib/systemd/system/
- name: PATH variablecopy:content: 'PATH={{ KEEPALIVED_INSTALL_DIR }}/sbin:$PATH'dest: /etc/profile.d/keepalived.sh
- name: PATH variable entryshell:cmd: . /etc/profile.d/keepalived.sh[root@ansible-server keepalived-master]# vim tasks/service.yml
- name: start keepalivedsystemd:name: keepalivedstate: startedenabled: yesdaemon_reload: yes[root@ansible-server keepalived-master]# vim tasks/main.yml
- include: install_package.yml
- include: keepalived_file.yml
- include: build.yml
- include: config.yml
- include: service.yml[root@ansible-server keepalived-master]# cd ../../
[root@ansible-server ansible]# tree roles/keepalived-master/
roles/keepalived-master/
├── files
│   ├── check_haproxy.sh
│   └── keepalived-2.2.4.tar.gz
├── tasks
│   ├── build.yml
│   ├── config.yml
│   ├── install_package.yml
│   ├── keepalived_file.yml
│   ├── main.yml
│   └── service.yml
├── templates
│   ├── keepalived.conf.j2
│   └── PowerTools.repo.j2
└── vars└── main.yml4 directories, 11 files[root@ansible-server ansible]# vim keepalived_master_role.yml 
---
- hosts: keepalives_masterroles:- role: keepalived-master[root@ansible-server ansible]# ansible-playbook keepalived_master_role.yml 

8.2 keepalived-backup

[root@ansible-server ansible]# mkdir -p roles/keepalived-backup/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/keepalived-backup/
[root@ansible-server keepalived-master]# ls
files  tasks  templates  vars[root@ansible-server keepalived-backup]#  wget .2.4.tar.gz -P files/[root@ansible-server keepalived-backup]# vim files/check_haproxy.sh 
#!/bin/bash
#
#**********************************************************************************************
#Author:        Raymond
#QQ:            88563128
#Date:          2022-01-09
#FileName:      check_haproxy.sh
#URL:           raymond.blog.csdn
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
err=0
for k in $(seq 1 3);docheck_code=$(pgrep haproxy)if [[ $check_code == "" ]]; thenerr=$(expr $err + 1)sleep 1continueelseerr=0breakfi
doneif [[ $err != "0" ]]; thenecho "systemctl stop keepalived"/usr/bin/systemctl stop keepalivedexit 1
elseexit 0
fi#下面VIP设置成自己的keepalived里的VIP(虚拟IP)地址
[root@ansible-server keepalived-backup]# vim vars/main.yml
URL: mirrors.cloud.tencent
ROCKY_URL: mirrors.sjtug.sjtu.edu
KEEPALIVED_FILE: keepalived-2.2.4.tar.gz
SRC_DIR: /usr/local/src
KEEPALIVED_INSTALL_DIR: /apps/keepalived
STATE: BACKUP
PRIORITY: 90
VIP: 172.31.3.188[root@ansible-server keepalived-backup]# vim templates/PowerTools.repo.j2 
[PowerTools]
name=PowerTools
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/PowerTools/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever/PowerTools/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}

[root@ansible-server keepalived-backup]# vim templates/keepalived.conf.j2
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface {{ ansible_default_ipv4.interface }}
    virtual_router_id 51
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        {{ VIP }} dev {{ ansible_default_ipv4.interface }} label {{ ansible_default_ipv4.interface }}:1
    }
    track_script {
        check_haproxy
    }
}

[root@ansible-server keepalived-backup]# vim tasks/install_package.yml
- name: find "[PowerTools]" mirror warehouse
  find:
    path: /etc/yum.repos.d/
    contains: '\[PowerTools\]'
  register: RETURN
  when:
    - (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
    - ansible_distribution_major_version=="8"
- name: copy repo file
  template:
    src: PowerTools.repo.j2
    dest: /etc/yum.repos.d/PowerTools.repo
  when: 
    - (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") and (ansible_distribution_major_version=="8") 
    - RETURN.matched == 0
- name: install CentOS8 or Rocky8 depend on the package
  yum:
    name: make,gcc,ipvsadm,autoconf,automake,openssl-devel,libnl3-devel,iptables-devel,ipset-devel,file-devel,net-snmp-devel,glib2-devel,pcre2-devel,libnftnl-devel,libmnl-devel,systemd-devel
  when:
    - (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
    - ansible_distribution_major_version=="8"
- name: install CentOS7 depend on the package
  yum:
    name: make,gcc,libnfnetlink-devel,libnfnetlink,ipvsadm,libnl,libnl-devel,libnl3,libnl3-devel,lm_sensors-libs,net-snmp-agent-libs,net-snmp-libs,openssh-server,openssh-clients,openssl,openssl-devel,automake,iproute
  when:
    - ansible_distribution=="CentOS"
    - ansible_distribution_major_version=="7"
- name: delete lock files
  file:
    path: "{{ item }}"
    state: absent
  loop:
    - /var/lib/dpkg/lock
    - /var/lib/apt/lists/lock
    - /var/cache/apt/archives/lock
  when:
    - ansible_distribution=="Ubuntu"
- name: apt update
  apt:
    update_cache: yes 
    force: yes 
  when:
    - ansible_distribution=="Ubuntu"
- name: install Ubuntu 20.04 depend on the package
  apt:
    name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-dev
    force: yes 
  when:
    - ansible_distribution=="Ubuntu"
    - ansible_distribution_major_version=="20"
- name: install Ubuntu 18.04 depend on the package
  apt:
    name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,iptables-dev,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-dev
    force: yes 
  when:
    - ansible_distribution=="Ubuntu"
    - ansible_distribution_major_version=="18"

[root@ansible-server keepalived-backup]# vim tasks/keepalived_file.yml
- name: unarchive keepalived package
  unarchive:
    src: "{{ KEEPALIVED_FILE }}"
    dest: "{{ SRC_DIR }}"

[root@ansible-server keepalived-backup]# vim tasks/build.yml
- name: get KEEPALIVED_DIR directory
  shell:
    cmd: echo {{ KEEPALIVED_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'
  register: KEEPALIVED_DIR
- name: Build and install Keepalived
  shell: 
    chdir: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}"
    cmd: ./configure --prefix={{ KEEPALIVED_INSTALL_DIR }} --disable-fwmark
- name: make && make install
  shell:
    chdir: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}"
    cmd: make -j {{ ansible_processor_vcpus }} && make install

[root@ansible-server keepalived-backup]# vim tasks/config.yml
- name: create /etc/keepalived directory
  file:
    path: /etc/keepalived
    state: directory
- name: copy keepalived.conf file
  template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
- name: copy check_haproxy.sh file
  copy:
    src: check_haproxy.sh
    dest: /etc/keepalived/
    mode: 0755
- name: copy keepalived.service file
  copy:
    remote_src: True
    src: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}/keepalived/keepalived.service"
    dest: /lib/systemd/system/
- name: PATH variable
  copy:
    content: 'PATH={{ KEEPALIVED_INSTALL_DIR }}/sbin:$PATH'
    dest: /etc/profile.d/keepalived.sh
- name: PATH variable entry
  shell:
    cmd: . /etc/profile.d/keepalived.sh

[root@ansible-server keepalived-backup]# vim tasks/service.yml
- name: start keepalived
  systemd:
    name: keepalived
    state: started
    enabled: yes
    daemon_reload: yes

[root@ansible-server keepalived-backup]# vim tasks/main.yml
- include: install_package.yml
- include: keepalived_file.yml
- include: build.yml
- include: config.yml
- include: service.yml

[root@ansible-server keepalived-backup]# cd ../../
[root@ansible-server ansible]# tree roles/keepalived-backup/
roles/keepalived-backup/
├── files
│   ├── check_haproxy.sh
│   └── keepalived-2.2.4.tar.gz
├── tasks
│   ├── build.yml
│   ├── config.yml
│   ├── install_package.yml
│   ├── keepalived_file.yml
│   ├── main.yml
│   └── service.yml
├── templates
│   ├── keepalived.conf.j2
│   └── PowerTools.repo.j2
└── vars
    └── main.yml

4 directories, 11 files

[root@ansible-server ansible]# vim keepalived_backup_role.yml 
---
- hosts: keepalives_backup
  roles:
    - role: keepalived-backup

[root@ansible-server ansible]# ansible-playbook keepalived_backup_role.yml 
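With both roles applied, a failover can be tested by hand. This is a minimal sketch, assuming haproxy is already running on both ha nodes: check_haproxy.sh stops keepalived once haproxy disappears, so stopping haproxy on ha1 should move the VIP to ha2; remember to start both services on ha1 again afterwards.

[root@k8s-ha01 ~]# systemctl stop haproxy               #after a few seconds check_haproxy.sh stops keepalived on ha1
[root@k8s-ha02 ~]# ip addr show | grep 172.31.3.188     #the VIP should now be bound on ha2
[root@k8s-ha01 ~]# systemctl start haproxy keepalived   #restore ha1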

9.harbor

9.1 docker from binary packages

[root@ansible-server ansible]# mkdir -p roles/docker-binary/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/docker-binary/
[root@ansible-server docker-binary]# ls
files  tasks  templates  vars

[root@ansible-server docker-binary]# wget .10.12.tgz -P files/

#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server docker-binary]# vim vars/main.yml
DOCKER_VERSION: 20.10.14
HARBOR_DOMAIN: harbor.raymonds

[root@ansible-server docker-binary]# vim files/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix://var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

[root@ansible-server docker-binary]# vim templates/daemon.json.j2
{
    "registry-mirrors": [
        "",
        "",
        ""
    ],
    "insecure-registries": ["{{ HARBOR_DOMAIN }}"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 5,
    "log-opts": {
        "max-size": "300m",
        "max-file": "2"
    },
    "live-restore": true
}

[root@ansible-server docker-binary]# vim tasks/docker_files.yml
- name: unarchive docker package
  unarchive:
    src: "docker-{{ DOCKER_VERSION }}.tgz"
    dest: /usr/local/src
- name: move docker files
  shell:
    cmd: mv /usr/local/src/docker/* /usr/bin/

[root@ansible-server docker-binary]# vim tasks/service_file.yml
- name: copy docker.service file
  copy:
    src: docker.service
    dest: /lib/systemd/system/docker.service

[root@ansible-server docker-binary]# vim tasks/set_mirror_accelerator.yml
- name: mkdir /etc/docker
  file:
    path: /etc/docker
    state: directory
- name: set mirror_accelerator
  template:
    src: daemon.json.j2
    dest: /etc/docker/daemon.json

[root@ansible-server docker-binary]# vim tasks/set_alias.yml
- name: set docker alias
  lineinfile:
    path: ~/.bashrc
    line: "{{ item }}"
  loop:
    - "alias rmi=\"docker images -qa|xargs docker rmi -f\""
    - "alias rmc=\"docker ps -qa|xargs docker rm -f\""

[root@ansible-server docker-binary]# vim tasks/service.yml
- name: start docker
  systemd:
    name: docker
    state: started
    enabled: yes
    daemon_reload: yes

[root@ansible-server docker-binary]# vim tasks/set_swap.yml
- name: set WARNING No swap limit support
  replace:
    path: /etc/default/grub
    regexp: '^(GRUB_CMDLINE_LINUX=.*)\"$'
    replace: '\1 swapaccount=1"'
  when:
    - ansible_distribution=="Ubuntu"
- name: update-grub
  shell:
    cmd: update-grub
  when:
    - ansible_distribution=="Ubuntu"
- name: reboot Ubuntu system
  reboot:
  when:
    - ansible_distribution=="Ubuntu"

[root@ansible-server docker-binary]# vim tasks/main.yml
- include: docker_files.yml
- include: service_file.yml
- include: set_mirror_accelerator.yml
- include: set_alias.yml
- include: service.yml
- include: set_swap.yml

[root@ansible-server docker-binary]# cd ../../
[root@ansible-server ansible]# tree roles/docker-binary/
roles/docker-binary/
├── files
│   ├── docker-20.10.14.tgz
│   └── docker.service
├── tasks
│   ├── docker_files.yml
│   ├── main.yml
│   ├── service_file.yml
│   ├── service.yml
│   ├── set_alias.yml
│   ├── set_mirror_accelerator.yml
│   └── set_swap.yml
├── templates
│   └── daemon.json.j2
└── vars
    └── main.yml

4 directories, 11 files

9.2 docker-compose

[root@ansible-server ansible]# mkdir -p roles/docker-compose/{tasks,files}
[root@ansible-server ansible]# cd roles/docker-compose/
[root@ansible-server docker-compose]# ls
files  tasks

[root@ansible-server docker-compose]# wget .29.2/docker-compose-Linux-x86_64 -P files

[root@ansible-server docker-compose]# vim tasks/install_docker_compose.yml
- name: copy docker compose file
  copy:
    src: docker-compose-linux-x86_64
    dest: /usr/bin/docker-compose
    mode: 0755

[root@ansible-server docker-compose]# vim tasks/main.yml
- include: install_docker_compose.yml

[root@ansible-server ansible]# tree roles/docker-compose/
roles/docker-compose/
├── files
│   └── docker-compose-linux-x86_64
└── tasks
    ├── install_docker_compose.yml
    └── main.yml

2 directories, 3 files
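This role is only pulled in later as a dependency of the harbor role (see meta/main.yml in 9.3). Once it has run on the harbor hosts, a quick check that the binary is usable:

[root@k8s-harbor01 ~]# docker-compose version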

9.3 harbor

[root@ansible-server ansible]# mkdir -p roles/harbor/{tasks,files,templates,vars,meta}
[root@ansible-server ansible]# cd roles/harbor/
[root@ansible-server harbor]# ls
files  meta  tasks  templates  vars

[root@ansible-server harbor]# wget .4.1/harbor-offline-installer-v2.4.1.tgz -P files/

[root@ansible-server harbor]# vim templates/harbor.service.j2
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f {{ HARBOR_INSTALL_DIR }}/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f {{ HARBOR_INSTALL_DIR }}/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target

[root@ansible-server harbor]# vim vars/main.yml
HARBOR_INSTALL_DIR: /apps
HARBOR_VERSION: 2.4.1
HARBOR_ADMIN_PASSWORD: 123456

[root@ansible-server harbor]# vim tasks/harbor_files.yml
- name: create HARBOR_INSTALL_DIR directory
  file:
    path: "{{ HARBOR_INSTALL_DIR }}"
    state: directory
- name: unarchive harbor package
  unarchive:
    src: "harbor-offline-installer-v{{ HARBOR_VERSION }}.tgz"
    dest: "{{ HARBOR_INSTALL_DIR }}/"
    creates: "{{ HARBOR_INSTALL_DIR }}/harbor"

[root@ansible-server harbor]# vim tasks/config.yml
- name: mv harbor.yml
  shell: 
    cmd: mv {{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml.tmpl {{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml
    creates: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
- name: set harbor.yml file 'hostname' string line
  replace: 
    path: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
    regexp: '^(hostname:) .*'
    replace: '\1 {{ ansible_default_ipv4.address }}'
- name: set harbor.yml file 'harbor_admin_password' string line
  replace: 
    path: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
    regexp: '^(harbor_admin_password:) .*'
    replace: '\1 {{ HARBOR_ADMIN_PASSWORD }}'
- name: set harbor.yml file 'https' string line
  replace:
    path: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
    regexp: '^(https:)'
    replace: '#\1'
- name: set harbor.yml file 'port' string line
  replace: 
    path: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
    regexp: '  (port: 443)'
    replace: '#  \1'
- name: set harbor.yml file 'certificate' string line
  replace: 
    path: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
    regexp: '  (certificate: .*)'
    replace: '#  \1'
- name: set harbor.yml file 'private_key' string line
  replace: 
    path: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
    regexp: '  (private_key: .*)'
    replace: '#  \1'

[root@ansible-server harbor]# vim tasks/install_python.yml
- name: install CentOS or Rocky python
  yum:
    name: python3
  when:
    - (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete lock files
  file:
    path: "{{ item }}"
    state: absent
  loop:
    - /var/lib/dpkg/lock
    - /var/lib/apt/lists/lock
    - /var/cache/apt/archives/lock
  when:
    - ansible_distribution=="Ubuntu"
- name: apt update
  apt:
    update_cache: yes 
    force: yes 
  when:
    - ansible_distribution=="Ubuntu"
- name: install Ubuntu python
  apt:
    name: python3
  when:
    - ansible_distribution=="Ubuntu"

[root@ansible-server harbor]# vim tasks/install_harbor.yml
- name: install harbor
  shell:
    cmd: "{{ HARBOR_INSTALL_DIR }}/harbor/install.sh"

[root@ansible-server harbor]# vim tasks/service_file.yml
- name: copy harbor.service
  template:
    src: harbor.service.j2
    dest: /lib/systemd/system/harbor.service

[root@ansible-server harbor]# vim tasks/service.yml
- name: service enable
  systemd:
    name: harbor
    state: started
    enabled: yes
    daemon_reload: yes

[root@ansible-server harbor]# vim tasks/main.yml
- include: harbor_files.yml
- include: config.yml
- include: install_python.yml
- include: install_harbor.yml
- include: service_file.yml
- include: service.yml

#These are the roles harbor depends on; docker-binary installs docker from the binary package. Adjust as needed.
[root@ansible-server harbor]# vim meta/main.yml
dependencies:
  - role: docker-binary
  - role: docker-compose

[root@ansible-server harbor]# cd ../../
[root@ansible-server ansible]# tree roles/harbor/
roles/harbor/
├── files
│   └── harbor-offline-installer-v2.4.1.tgz
├── meta
│   └── main.yml
├── tasks
│   ├── config.yml
│   ├── harbor_files.yml
│   ├── install_harbor.yml
│   ├── install_python.yml
│   ├── main.yml
│   ├── service_file.yml
│   └── service.yml
├── templates
│   └── harbor.service.j2
└── vars
    └── main.yml

5 directories, 11 files

[root@ansible-server ansible]# vim harbor_role.yml
---
- hosts: harbor
  roles:
    - role: harbor

[root@ansible-server ansible]# ansible-playbook harbor_role.yml
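A quick check on the harbor hosts that the service and its containers came up; a minimal sketch based on the paths and variables used above (HARBOR_INSTALL_DIR is /apps):

[root@k8s-harbor01 ~]# systemctl is-active harbor
[root@k8s-harbor01 ~]# docker-compose -f /apps/harbor/docker-compose.yml ps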

9.4 Create the harbor projects

This step must be done. Otherwise the images pulled later cannot be pushed to harbor and the ansible run will fail. (A scripted alternative using the Harbor API is sketched after these steps.)
Create the project google_containers on harbor01


Create the project google_containers on harbor02


Create a replication endpoint (registry target) on harbor02


Create a replication rule on harbor02


Create a replication endpoint (registry target) on harbor01


Create a replication rule on harbor01
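If you prefer not to click through the web UI, the two google_containers projects can also be created from the ansible host with Harbor's v2.0 REST API. This is only a hedged sketch: it assumes the harbor hosts answer over HTTP at 172.31.3.106 and 172.31.3.107 with the admin password 123456 from vars/main.yml, and it only covers project creation; the replication endpoints and rules are still easier to set up in the UI.

[root@ansible-server ansible]# for h in 172.31.3.106 172.31.3.107; do
  curl -s -u admin:123456 -X POST "http://${h}/api/v2.0/projects" \
       -H "Content-Type: application/json" \
       -d '{"project_name": "google_containers", "metadata": {"public": "true"}}'
done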

10. Deploy etcd

10.1 Install etcd

[root@ansible-server ansible]# mkdir -p roles/etcd/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/etcd/
[root@ansible-server etcd]# ls
files  tasks  templates  vars

[root@ansible-server etcd]# wget .5.1/etcd-v3.5.1-linux-amd64.tar.gz
[root@ansible-server etcd]# mkdir files/etcd
[root@ansible-server etcd]# tar -xf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C files/etcd/ etcd-v3.5.1-linux-amd64/etcd{,ctl}
[root@ansible-server etcd]# ls files/etcd/
etcd  etcdctl
[root@ansible-server etcd]# rm -f etcd-v3.5.1-linux-amd64.tar.gz

[root@ansible-server etcd]# vim tasks/copy_etcd_file.yml
- name: copy etcd files to etcd
  copy:
    src: "etcd/{{ item }}"
    dest: /usr/local/bin/
    mode: 0755
  loop:
    - etcd
    - etcdctl
  when:
    - inventory_hostname in groups.etcd
- name: create /opt/cni/bin directory
  file:
    path: /opt/cni/bin
    state: directory
  when:
    - inventory_hostname in groups.etcd

[root@ansible-server etcd]# wget ".2/cfssl_linux-amd64" -O files/cfssl
[root@ansible-server etcd]# wget ".2/cfssljson_linux-amd64" -O files/cfssljson

#Change the ETCD02 and ETCD03 IP addresses below to your own
[root@ansible-server etcd]# vim vars/main.yml
ETCD_CLUSTER: etcd
K8S_CLUSTER: kubernetes
ETCD_CERT:
  - etcd-ca-key.pem
  - etcd-ca.pem
  - etcd-key.pem
  - etcd.pem
ETCD02: 172.31.3.109
ETCD03: 172.31.3.110

[root@ansible-server etcd]# mkdir templates/pki
[root@ansible-server etcd]# vim templates/pki/etcd-ca-csr.json.j2
{"CN": "{{ ETCD_CLUSTER }}","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "etcd","OU": "Etcd Security"}],"ca": {"expiry": "876000h"}
}[root@ansible-server etcd]# vim templates/pki/ca-config.json.j2
{"signing": {"default": {"expiry": "876000h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "876000h"}}}
}[root@ansible-server etcd]# vim templates/pki/etcd-csr.json.j2 
{"CN": "{{ ETCD_CLUSTER }}","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "etcd","OU": "Etcd Security"}]
}[root@ansible-server etcd]# vim tasks/create_etcd_cert.yml
- name: copy cfssl and cfssljson tools
  copy: 
    src: "{{ item }}" 
    dest: /usr/local/bin
    mode: 0755
  loop: 
    - cfssl
    - cfssljson
  when:
    - ansible_hostname=="k8s-etcd01"
- name: create /etc/etcd/ssl directory
  file:
    path: /etc/etcd/ssl
    state: directory
  when:
    - inventory_hostname in groups.etcd
- name: create pki directory
  file:
    path: /root/pki
    state: directory
  when:
    - ansible_hostname=="k8s-etcd01"
- name: copy pki files
  template: 
    src: "pki/{{ item }}.j2" 
    dest: "/root/pki/{{ item }}"
  loop: 
    - etcd-ca-csr.json
    - ca-config.json
    - etcd-csr.json
  when:
    - ansible_hostname=="k8s-etcd01"
- name: create etcd-ca cert
  shell:
    chdir: /root/pki
    cmd: cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
    creates: /etc/etcd/ssl/etcd-ca.pem
  when:
    - ansible_hostname=="k8s-etcd01"
- name: create etcd cert
  shell:
    chdir: /root/pki
    cmd: "cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,{% for i in groups.etcd %}{{ hostvars[i].ansible_hostname}},{% endfor %}{% for i in groups.etcd %}{{ hostvars[i].ansible_default_ipv4.address }}{% if not loop.last %},{% endif %}{% endfor %} -profile={{ K8S_CLUSTER }} etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd"
    creates: /etc/etcd/ssl/etcd-key.pem
  when:
    - ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to etcd02
  synchronize:
    src: "/etc/etcd/ssl/{{ item }}"
    dest: /etc/etcd/ssl/
    mode: pull
  loop:
    "{{ ETCD_CERT }}"
  delegate_to: "{{ ETCD02 }}"
  when:
    - ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to etcd03
  synchronize:
    src: "/etc/etcd/ssl/{{ item }}"
    dest: /etc/etcd/ssl/
    mode: pull
  loop:
    "{{ ETCD_CERT }}"
  delegate_to: "{{ ETCD03 }}"
  when:
    - ansible_hostname=="k8s-etcd01"

[root@ansible-server etcd]# mkdir templates/config
[root@ansible-server etcd]# vim templates/config/etcd.config.yml.j2
name: '{{ inventory_hostname }}'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://{{ ansible_default_ipv4.address }}:2380'
listen-client-urls: 'https://{{ ansible_default_ipv4.address }}:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://{{ ansible_default_ipv4.address }}:2380'
advertise-client-urls: 'https://{{ ansible_default_ipv4.address }}:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: '{% for i in groups.etcd %}{{ hostvars[i].inventory_hostname }}=https://{{ hostvars[i].ansible_default_ipv4.address }}:2380{% if not loop.last %},{% endif %}{% endfor %}'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

[root@ansible-server etcd]# mkdir files/service
[root@ansible-server etcd]# vim files/service/etcd.service
[Unit]
Description=Etcd Service
Documentation=/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

[root@ansible-server etcd]# vim tasks/etcd_config.yml
- name: copy etcd_config file
  template: 
    src: config/etcd.config.yml.j2
    dest: /etc/etcd/etcd.config.yml
  when:
    - inventory_hostname in groups.etcd
- name: copy etcd.service file
  copy: 
    src: service/etcd.service
    dest: /lib/systemd/system/etcd.service
  when:
    - inventory_hostname in groups.etcd
- name: create /etc/kubernetes/pki/etcd directory
  file:
    path: /etc/kubernetes/pki/etcd
    state: directory
  when:
    - inventory_hostname in groups.etcd
- name: link etcd_ssl to kubernetes pki
  file: 
    src: "/etc/etcd/ssl/{{ item }}"
    dest: "/etc/kubernetes/pki/etcd/{{ item }}"
    state: link
  loop:
    "{{ ETCD_CERT }}"
  when:
    - inventory_hostname in groups.etcd
- name: start etcd
  systemd:
    name: etcd
    state: started
    enabled: yes
    daemon_reload: yes
  when:
    - inventory_hostname in groups.etcd

[root@ansible-server etcd]# vim tasks/main.yml
- include: copy_etcd_file.yml
- include: create_etcd_cert.yml
- include: etcd_config.yml

[root@ansible-server etcd]# cd ../../
[root@ansible-server ansible]# tree roles/etcd/
roles/etcd/
├── files
│   ├── cfssl
│   ├── cfssljson
│   ├── etcd
│   │   ├── etcd
│   │   └── etcdctl
│   └── service
│       └── etcd.service
├── tasks
│   ├── copy_etcd_file.yml
│   ├── create_etcd_cert.yml
│   ├── etcd_config.yml
│   └── main.yml
├── templates
│   ├── config
│   │   └── etcd.config.yml.j2
│   └── pki
│       ├── ca-config.json.j2
│       ├── etcd-ca-csr.json.j2
│       └── etcd-csr.json.j2
└── vars
    └── main.yml

8 directories, 14 files

[root@ansible-server ansible]# vim etcd_role.yml
---
- hosts: etcd
  roles:
    - role: etcd

[root@ansible-server ansible]# ansible-playbook etcd_role.yml

10.2 Verify etcd

[root@k8s-etcd01 ~]# export ETCDCTL_API=3

[root@k8s-etcd01 ~]# etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.31.3.108:2379 | a9fef56ff96ed75c |   3.5.1 |   25 kB |     false |      false |         2 |          8 |                  8 |        |
| 172.31.3.109:2379 | 8319ef09e8b3d277 |   3.5.1 |   20 kB |      true |      false |         2 |          8 |                  8 |        |
| 172.31.3.110:2379 | 209a1f57c506dba2 |   3.5.1 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
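Besides endpoint status, endpoint health gives a quick pass/fail view of the cluster; all three members should report healthy:

[root@k8s-etcd01 ~]# etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health --write-out=table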

11. Deploy Containerd

[root@ansible-server ansible]# mkdir -p roles/containerd-binary/{tasks,files,vars}
[root@ansible-server ansible]# cd roles/containerd-binary/
[root@ansible-server containerd-binary]# ls
files  tasks  vars

[root@ansible-server containerd-binary]# wget .10.12.tgz -P files/

#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server containerd-binary]# vim vars/main.yml
DOCKER_VERSION: 20.10.14
HARBOR_DOMAIN: harbor.raymonds
USERNAME: admin
PASSWORD: 123456

[root@ansible-server containerd-binary]# vim files/containerd.service 
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     .0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

[root@ansible-server containerd-binary]# vim tasks/docker_files.yml
- name: unarchive docker package
  unarchive:
    src: "docker-{{ DOCKER_VERSION }}.tgz"
    dest: /usr/local/src
- name: move docker files
  shell:
    cmd: mv /usr/local/src/docker/* /usr/bin/

[root@ansible-server containerd-binary]# vim tasks/service_file.yml
- name: copy containerd.service file
  copy:
    src: containerd.service
    dest: /lib/systemd/system/containerd.service

[root@ansible-server containerd-binary]# vim tasks/config_containerd.yml 
- name: load modules and kernel
  shell:
    cmd: |
      cat > /etc/modules-load.d/containerd.conf <<-EOF
      overlay
      br_netfilter
      EOF
      modprobe -- overlay
      modprobe -- br_netfilter
      cat > /etc/sysctl.d/99-kubernetes-cri.conf <<-EOF
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sysctl --system
- name: mkdir /etc/containerd
  file:
    path: /etc/containerd
    state: directory
- name: set Containerd config file
  shell:
    cmd: containerd config default | tee /etc/containerd/config.toml
- name: set SystemdCgroup line
  replace:
    path: /etc/containerd/config.toml
    regexp: '(.*SystemdCgroup = ).*'
    replace: '\1true'
- name: set sandbox_image line
  replace:
    path: /etc/containerd/config.toml
    regexp: '(.*sandbox_image = ).*'
    replace: '\1"registry.aliyuncs/google_containers/pause:3.6"'
- name: set Mirror Accelerator
  lineinfile:
    path: /etc/containerd/config.toml
    insertafter: '.*registry.mirrors.*'
    line: "        [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"docker.io\"]\n          endpoint = [\"\" ,\"\" ,\"\"]\n        [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"{{ HARBOR_DOMAIN }}\"]\n          endpoint = [\"http://{{ HARBOR_DOMAIN }}\"]"
- name: set Private warehouse certification
  lineinfile:
    path: /etc/containerd/config.toml
    insertafter: '.*registry.configs.*'
    line: "        [plugins.\"io.containerd.grpc.v1.cri\".registry.configs.\"{{ HARBOR_DOMAIN }}\".tls]\n          insecure_skip_verify = true\n        [plugins.\"io.containerd.grpc.v1.cri\".registry.configs.\"{{ HARBOR_DOMAIN }}\".auth]\n          username = \"{{ USERNAME }}\"\n          password = \"{{ PASSWORD }}\""

[root@ansible-server containerd-binary]# vim tasks/service.yml
- name: start containerd
  systemd:
    name: containerd
    state: started
    enabled: yes
    daemon_reload: yes

[root@ansible-server containerd-binary]# vim tasks/set_crictl.yml
- name: set crictl.yaml
  shell:
    cmd: |
      cat > /etc/crictl.yaml <<-EOF
      runtime-endpoint: unix:///run/containerd/containerd.sock
      image-endpoint: unix:///run/containerd/containerd.sock
      timeout: 10
      debug: false
      EOF

[root@ansible-server containerd-binary]# vim tasks/set_alias.yml
- name: set containerd alias
  lineinfile:
    path: ~/.bashrc
    line: "{{ item }}"
  loop:
    - "alias rmi=\"ctr images list -q|xargs ctr images rm\""

[root@ansible-server containerd-binary]# vim tasks/main.yml
- include: docker_files.yml
- include: service_file.yml
- include: config_containerd.yml
- include: service.yml
- include: set_crictl.yml
- include: set_alias.yml

[root@ansible-server containerd-binary]# cd ../../
[root@ansible-server ansible]# tree roles/containerd-binary/
roles/containerd-binary/
├── files
│   ├── containerd.service
│   └── docker-20.10.14.tgz
├── tasks
│   ├── config_containerd.yml
│   ├── docker_files.yml
│   ├── main.yml
│   ├── service_file.yml
│   ├── service.yml
│   ├── set_alias.yml
│   └── set_crictl.yml
└── vars
    └── main.yml

3 directories, 10 files

[root@ansible-server ansible]# vim containerd_binary_role.yml 
---
- hosts: k8s_cluster
  roles:
    - role: containerd-binary

[root@ansible-server ansible]# ansible-playbook containerd_binary_role.yml 
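A quick check on any k8s_cluster host that containerd is running with the expected configuration; a minimal sketch (crictl uses the /etc/crictl.yaml written by the role, and SystemdCgroup should show up as true in the dumped config):

[root@k8s-master01 ~]# systemctl is-active containerd
[root@k8s-master01 ~]# ctr version
[root@k8s-master01 ~]# crictl info | grep SystemdCgroup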

12. Deploy master

12.1 Install the master components

[root@ansible-server ansible]# mkdir -p roles/kubernetes-master/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/kubernetes-master/
[root@ansible-server kubernetes-master]# ls
files  tasks  templates  vars

#Change the MASTER01, MASTER02 and MASTER03 IP addresses below to your own
[root@ansible-server kubernetes-master]# vim vars/main.yml
ETCD_CERT:
  - etcd-ca-key.pem
  - etcd-ca.pem
  - etcd-key.pem
  - etcd.pem
MASTER01: 172.31.3.101
MASTER02: 172.31.3.102
MASTER03: 172.31.3.103

[root@ansible-server kubernetes-master]# vim tasks/copy_etcd_cert.yml
- name: create /etc/etcd/ssl directoryfile:path: /etc/etcd/sslstate: directorywhen:- inventory_hostname in groups.master
- name: transfer etcd-ca-key.pem file from etcd01 to master01synchronize:src: "/etc/etcd/ssl/{{ item }}"dest: /etc/etcd/ssl/mode: pullloop:"{{ ETCD_CERT }}"delegate_to: "{{ MASTER01 }}"when:- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to master02synchronize:src: "/etc/etcd/ssl/{{ item }}"dest: /etc/etcd/ssl/mode: pullloop:"{{ ETCD_CERT }}"delegate_to: "{{ MASTER02 }}"when:- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to master03synchronize:src: "/etc/etcd/ssl/{{ item }}"dest: /etc/etcd/ssl/mode: pullloop:"{{ ETCD_CERT }}"delegate_to: "{{ MASTER03 }}"when:- ansible_hostname=="k8s-etcd01"
- name: create /etc/kubernetes/pki/etcd directoryfile:path: /etc/kubernetes/pki/etcdstate: directorywhen:- inventory_hostname in groups.master
- name: link etcd_ssl to kubernetes pkifile: src: "/etc/etcd/ssl/{{ item }}"dest: "/etc/kubernetes/pki/etcd/{{ item }}"state: linkloop:"{{ ETCD_CERT }}"when:- inventory_hostname in groups.master[root@ansible-server kubernetes-master]# wget .23.6/kubernetes-server-linux-amd64.tar.gz
[root@ansible-server kubernetes-master]# mkdir files/bin
[root@ansible-server kubernetes-master]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C files/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
[root@ansible-server kubernetes-master]# ls files/bin/
kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler
[root@ansible-server kubernetes-master]# rm -f kubernetes-server-linux-amd64.tar.gz[root@ansible-server kubernetes-master]# vim tasks/copy_kubernetes_file.yml
- name: copy kubernetes files to mastercopy:src: "bin/{{ item }}"dest: /usr/local/bin/mode: 0755loop:- kube-apiserver- kube-controller-manager- kubectl- kubelet- kube-proxy- kube-schedulerwhen:- inventory_hostname in groups.master
- name: create /opt/cni/bin directoryfile:path: /opt/cni/binstate: directorywhen:- inventory_hostname in groups.master[root@ansible-server kubernetes-master]# wget ".2/cfssl_linux-amd64" -O files/cfssl
[root@ansible-server kubernetes-master]# wget ".2/cfssljson_linux-amd64" -O files/cfssljson
[root@ansible-server kubernetes-master]# ls files/
bin  cfssl  cfssljson

#Change SERVICE_IP below to your planned service IP, and set VIP to the VIP (virtual IP) address used in your keepalived configuration
[root@ansible-server kubernetes-master]# vim vars/main.yml
...
SERVICE_IP: 10.96.0.1 
VIP: 172.31.3.188
K8S_CLUSTER: kubernetes
KUBERNETES_CERT:
  - ca.csr
  - ca-key.pem
  - ca.pem
  - apiserver.csr
  - apiserver-key.pem
  - apiserver.pem
  - front-proxy-ca.csr
  - front-proxy-ca-key.pem
  - front-proxy-ca.pem
  - front-proxy-client.csr
  - front-proxy-client-key.pem
  - front-proxy-client.pem
  - controller-manager.csr
  - controller-manager-key.pem
  - controller-manager.pem
  - scheduler.csr
  - scheduler-key.pem
  - scheduler.pem
  - admin.csr
  - admin-key.pem
  - admin.pem
  - sa.key
  - sa.pub
KUBECONFIG:
  - controller-manager.kubeconfig
  - scheduler.kubeconfig
  - admin.kubeconfig

[root@ansible-server kubernetes-master]# mkdir templates/pki
[root@ansible-server kubernetes-master]# vim templates/pki/ca-csr.json.j2
{"CN": "{{ K8S_CLUSTER }}","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}],"ca": {"expiry": "876000h"}
}[root@ansible-server kubernetes-master]# vim templates/pki/ca-config.json.j2
{"signing": {"default": {"expiry": "876000h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "876000h"}}}
}[root@ansible-server kubernetes-master]# vim templates/pki/apiserver-csr.json.j2
{"CN": "kube-apiserver","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}]
}[root@ansible-server kubernetes-master]# vim templates/pki/front-proxy-ca-csr.json.j2
{"CN": "{{ K8S_CLUSTER }}","key": {"algo": "rsa","size": 2048}
}[root@ansible-server kubernetes-master]# vim templates/pki/front-proxy-client-csr.json.j2
{"CN": "front-proxy-client","key": {"algo": "rsa","size": 2048}
}[root@ansible-server kubernetes-master]# vim templates/pki/manager-csr.json.j2
{"CN": "system:kube-controller-manager","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:kube-controller-manager","OU": "Kubernetes-manual"}]
}[root@ansible-server kubernetes-master]# vim templates/pki/scheduler-csr.json.j2
{"CN": "system:kube-scheduler","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:kube-scheduler","OU": "Kubernetes-manual"}]
}[root@ansible-server kubernetes-master]# vim templates/pki/admin-csr.json.j2
{"CN": "admin","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "system:masters","OU": "Kubernetes-manual"}]
}[root@ansible-server kubernetes-master]# vim tasks/create_kubernetes_cert.yml
- name: create /etc/kubernetes/pki directoryfile:path: /etc/kubernetes/pkistate: directorywhen:- inventory_hostname in groups.master
- name: copy cfssl and cfssljson toolscopy: src: "{{ item }}" dest: /usr/local/binmode: 0755loop: - cfssl- cfssljsonwhen:- ansible_hostname=="k8s-master01"
- name: create pki directoryfile:path: /root/pkistate: directorywhen:- ansible_hostname=="k8s-master01"
- name: copy pki filestemplate: src: "pki/{{ item }}.j2" dest: "/root/pki/{{ item }}"loop: - ca-csr.json- ca-config.json- apiserver-csr.json- front-proxy-ca-csr.json- front-proxy-client-csr.json- manager-csr.json- scheduler-csr.json- admin-csr.jsonwhen:- ansible_hostname=="k8s-master01"
- name: create ca certshell:chdir: /root/pkicmd: cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/cacreates: /etc/kubernetes/pki/ca.pemwhen:- ansible_hostname=="k8s-master01"
- name: create apiserver certshell:chdir: /root/pkicmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname={{ SERVICE_IP }},{{ VIP }},127.0.0.1,{{ K8S_CLUSTER }},{{ K8S_CLUSTER }}.default,{{ K8S_CLUSTER }}.default.svc,{{ K8S_CLUSTER }}.default.svc.cluster,{{ K8S_CLUSTER }}.default.svc.cluster.local,{% for i in groups.master %}{{ hostvars[i].ansible_default_ipv4.address }}{% if not loop.last %},{% endif %}{% endfor %} -profile={{ K8S_CLUSTER }} apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiservercreates: /etc/kubernetes/pki/apiserver.pemwhen:- ansible_hostname=="k8s-master01"
- name: create front-proxy-ca certshell:chdir: /root/pkicmd: cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-cacreates: /etc/kubernetes/pki/front-proxy-ca.pemwhen:- ansible_hostname=="k8s-master01"
- name: create front-proxy-client certshell:chdir: /root/pkicmd: cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-clientcreates: /etc/kubernetes/pki/front-proxy-client.pemwhen:- ansible_hostname=="k8s-master01"
- name: create controller-manager certshell:chdir: /root/pkicmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-managercreates: /etc/kubernetes/pki/controller-manager.pemwhen:- ansible_hostname=="k8s-master01"
- name: set-cluster controller-manager.kubeconfigshell:cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-credentials controller-manager.kubeconfigshell:cmd: kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-context controller-manager.kubeconfigshell:cmd: kubectl config set-context system:kube-controller-manager@{{ K8S_CLUSTER }} --cluster={{ K8S_CLUSTER }} --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: use-context controller-manager.kubeconfigshell:cmd: kubectl config use-context system:kube-controller-manager@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/controller-manager.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: create scheduler certshell:chdir: /root/pkicmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/schedulercreates: /etc/kubernetes/pki/scheduler.pemwhen:- ansible_hostname=="k8s-master01"
- name: set-cluster scheduler.kubeconfigshell:cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-credentials scheduler.kubeconfigshell:cmd: kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.pem --client-key=/etc/kubernetes/pki/scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/scheduler.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-context scheduler.kubeconfigshell:cmd: kubectl config set-context system:kube-scheduler@{{ K8S_CLUSTER }} --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: use-context scheduler.kubeconfigshell:cmd: kubectl config use-context system:kube-scheduler@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/scheduler.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: create admin certshell:chdir: /root/pkicmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admincreates: /etc/kubernetes/pki/admin.pemwhen:- ansible_hostname=="k8s-master01"
- name: set-cluster admin.kubeconfigshell:cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/admin.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-credentials admin.kubeconfigshell:cmd: kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-context admin.kubeconfigshell:cmd: kubectl config set-context kubernetes-admin@{{ K8S_CLUSTER }} --cluster={{ K8S_CLUSTER }} --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: use-context admin.kubeconfigshell:cmd: kubectl config use-context kubernetes-admin@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/admin.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: create sa.keyshell:cmd: openssl genrsa -out /etc/kubernetes/pki/sa.key 2048creates: /etc/kubernetes/pki/sa.keywhen:- ansible_hostname=="k8s-master01"
- name: create sa.pubshell:cmd: openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pubcreates: /etc/kubernetes/pki/sa.pubwhen:- ansible_hostname=="k8s-master01"
- name: transfer cert files from master01 to master02synchronize:src: "/etc/kubernetes/pki/{{ item }}"dest: /etc/kubernetes/pkimode: pullloop:"{{ KUBERNETES_CERT }}"delegate_to: "{{ MASTER02 }}"when:- ansible_hostname=="k8s-master01"
- name: transfer cert files from master01 to master03synchronize:src: "/etc/kubernetes/pki/{{ item }}"dest: /etc/kubernetes/pkimode: pullloop:"{{ KUBERNETES_CERT }}"delegate_to: "{{ MASTER03 }}"when:- ansible_hostname=="k8s-master01"
- name: transfer kubeconfig files from master01 to master02synchronize:src: "/etc/kubernetes/{{ item }}"dest: /etc/kubernetes/mode: pullloop:"{{ KUBECONFIG }}"delegate_to: "{{ MASTER02 }}"when:- ansible_hostname=="k8s-master01"
- name: transfer kubeconfig files from master01 to master03
  synchronize:
    src: "/etc/kubernetes/{{ item }}"
    dest: /etc/kubernetes/
    mode: pull
  loop:
    "{{ KUBECONFIG }}"
  delegate_to: "{{ MASTER03 }}"
  when:
    - ansible_hostname=="k8s-master01"

#Change SERVICE_SUBNET to your planned service subnet, POD_SUBNET to your planned pod subnet, the MASTER variable IPs to the IPs of master02 and master03, HARBOR_DOMAIN to your own harbor domain name, and CLUSTERDNS to the 10th IP of the service subnet
[root@ansible-server kubernetes-master]# vim vars/main.yml
...
KUBE_DIRECTROY:
  - /etc/kubernetes/manifests/
  - /etc/systemd/system/kubelet.service.d
  - /var/lib/kubelet
  - /var/log/kubernetes
SERVICE_SUBNET: 10.96.0.0/12
POD_SUBNET: 192.168.0.0/12
MASTER:
  - 172.31.3.102
  - 172.31.3.103
CLUSTERDNS: 10.96.0.10
PKI_DIR: /etc/kubernetes/pki
K8S_DIR: /etc/kubernetes 

[root@ansible-server kubernetes-master]# mkdir templates/service
[root@ansible-server kubernetes-master]# vim templates/service/kube-apiserver.service.j2
[Unit]
Description=Kubernetes API Server
Documentation=
After=network.target[Service]
ExecStart=/usr/local/bin/kube-apiserver \--v=2  \--logtostderr=true  \--allow-privileged=true  \--bind-address=0.0.0.0  \--secure-port=6443  \--insecure-port=0  \--advertise-address={{ ansible_default_ipv4.address }} \--service-cluster-ip-range={{ SERVICE_SUBNET }}  \--service-node-port-range=30000-32767  \--etcd-servers={% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}:2379{% if not loop.last %},{% endif %}{% endfor %} \--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \--etcd-certfile=/etc/etcd/ssl/etcd.pem  \--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \--client-ca-file=/etc/kubernetes/pki/ca.pem  \--tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \--service-account-key-file=/etc/kubernetes/pki/sa.pub  \--service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \--service-account-issuer= \--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \--authorization-mode=Node,RBAC  \--enable-bootstrap-token-auth=true  \--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \--requestheader-allowed-names=aggregator  \--requestheader-group-headers=X-Remote-Group  \--requestheader-extra-headers-prefix=X-Remote-Extra-  \--requestheader-username-headers=X-Remote-User# --token-auth-file=/etc/kubernetes/token.csvRestart=on-failure
RestartSec=10s
LimitNOFILE=65535[Install]
WantedBy=multi-user.target[root@ansible-server kubernetes-master]# vim templates/service/kube-controller-manager.service.j2
[Unit]
Description=Kubernetes Controller Manager
Documentation=
After=network.target[Service]
ExecStart=/usr/local/bin/kube-controller-manager \--v=2 \--logtostderr=true \--address=127.0.0.1 \--root-ca-file=/etc/kubernetes/pki/ca.pem \--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \--service-account-private-key-file=/etc/kubernetes/pki/sa.key \--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \--leader-elect=true \--use-service-account-credentials=true \--node-monitor-grace-period=40s \--node-monitor-period=5s \--pod-eviction-timeout=2m0s \--controllers=*,bootstrapsigner,tokencleaner \--allocate-node-cidrs=true \--cluster-cidr={{ POD_SUBNET }} \--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \--node-cidr-mask-size=24Restart=always
RestartSec=10s[Install]
WantedBy=multi-user.target[root@ansible-server kubernetes-master]# mkdir files/service/
[root@ansible-server kubernetes-master]# vim files/service/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=
After=network.target[Service]
ExecStart=/usr/local/bin/kube-scheduler \--v=2 \--logtostderr=true \--address=127.0.0.1 \--leader-elect=true \--kubeconfig=/etc/kubernetes/scheduler.kubeconfigRestart=always
RestartSec=10s[Install]
WantedBy=multi-user.target[root@ansible-server kubernetes-master]# mkdir files/yaml
[root@ansible-server kubernetes-master]# vim files/yaml/bootstrap.secret.yaml
apiVersion: v1
kind: Secret
metadata:name: bootstrap-token-c8ad9cnamespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:description: "The default bootstrap token generated by 'kubelet '."token-id: c8ad9ctoken-secret: 2e4d610cf3e7426eusage-bootstrap-authentication: "true"usage-bootstrap-signing: "true"auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: kubelet-bootstrap
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.iokind: Groupname: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: node-autoapprove-bootstrap
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.iokind: Groupname: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: node-autoapprove-certificate-rotation
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.iokind: Groupname: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:annotations:rbac.authorization.kubernetes.io/autoupdate: "true"labels:kubernetes.io/bootstrapping: rbac-defaultsname: system:kube-apiserver-to-kubelet
rules:- apiGroups:- ""resources:- nodes/proxy- nodes/stats- nodes/log- nodes/spec- nodes/metricsverbs:- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: system:kube-apiservernamespace: ""
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:kube-apiserver-to-kubelet
subjects:- apiGroup: rbac.authorization.k8s.iokind: Username: kube-apiserver[root@ansible-server kubernetes-master]# vim files/service/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=
After=containerd.service
Requires=containerd.service[Service]
ExecStart=/usr/local/bin/kubeletRestart=always
StartLimitInterval=0
RestartSec=10[Install]
WantedBy=multi-user.target[root@ansible-server kubernetes-master]# mkdir files/config
[root@ansible-server kubernetes-master]# vim files/config/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS[root@ansible-server kubernetes-master]# mkdir templates/config
[root@ansible-server kubernetes-master]# vim templates/config/kubelet-conf.yml.j2
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:anonymous:enabled: falsewebhook:cacheTTL: 2m0senabled: truex509:clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:mode: Webhookwebhook:cacheAuthorizedTTL: 5m0scacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- {{ CLUSTERDNS }}
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:imagefs.available: 15%memory.available: 100Minodefs.available: 10%nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s[root@ansible-server kubernetes-master]# vim templates/config/kube-proxy.conf.j2
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:acceptContentTypes: ""burst: 10contentType: application/vnd.kubernetes.protobufkubeconfig: /etc/kubernetes/kube-proxy.kubeconfigqps: 5
clusterCIDR: {{ POD_SUBNET }}
configSyncPeriod: 15m0s
conntrack:max: nullmaxPerCore: 32768min: 131072tcpCloseWaitTimeout: 1h0m0stcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:masqueradeAll: falsemasqueradeBit: 14minSyncPeriod: 0ssyncPeriod: 30s
ipvs:masqueradeAll: trueminSyncPeriod: 5sscheduler: "rr"syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms[root@ansible-server kubernetes-master]# vim files/service/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=
After=network.target[Service]
ExecStart=/usr/local/bin/kube-proxy \--config=/etc/kubernetes/kube-proxy.conf \--v=2Restart=always
RestartSec=10s[Install]
WantedBy=multi-user.target[root@ansible-server kubernetes-master]# vim tasks/master_config.yml 
- name: create kubernetes directoryfile:path: "{{ item }}"state: directoryloop:"{{ KUBE_DIRECTROY }}"when:- inventory_hostname in groups.master
- name: copy kube-apiserver.servicetemplate:src: service/kube-apiserver.service.j2dest: /lib/systemd/system/kube-apiserver.servicewhen:- inventory_hostname in groups.master
- name: start kube-apiserversystemd:name: kube-apiserverstate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.master
- name: copy kube-controller-manager.servicetemplate:src: service/kube-controller-manager.service.j2dest: /lib/systemd/system/kube-controller-manager.servicewhen:- inventory_hostname in groups.master
- name: start kube-controller-managersystemd:name: kube-controller-managerstate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.master
- name: copy kube-scheduler.servicecopy:src: service/kube-scheduler.servicedest: /lib/systemd/system/when:- inventory_hostname in groups.master
- name: start kube-schedulersystemd:name: kube-schedulerstate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.master
- name: set-cluster bootstrap-kubelet.kubeconfigshell:cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-credentials bootstrap-kubelet.kubeconfigshell:cmd: kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-context bootstrap-kubelet.kubeconfigshell:cmd: kubectl config set-context tls-bootstrap-token-user@{{ K8S_CLUSTER }}  --cluster={{ K8S_CLUSTER }} --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: use-context bootstrap-kubelet.kubeconfigshell:cmd: kubectl config use-context tls-bootstrap-token-user@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: create user kube config directoryfile: path: /root/.kubestate: directorywhen:- ansible_hostname=="k8s-master01"
- name: copy kubeconfig to user directorycopy: src: /etc/kubernetes/admin.kubeconfig dest: /root/.kube/configremote_src: yeswhen:- ansible_hostname=="k8s-master01"
- name: copy bootstrap.secret.yamlcopy: src: yaml/bootstrap.secret.yamldest: /rootwhen:- ansible_hostname=="k8s-master01"
- name: create pod by bootstrap.secret.yamlshell: chdir: /rootcmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f bootstrap.secret.yaml"when:- ansible_hostname=="k8s-master01"
- name: transfer bootstrap-kubelet.kubeconfig file from mater01 to master02 master03synchronize:src: /etc/kubernetes/bootstrap-kubelet.kubeconfigdest: /etc/kubernetes/mode: pulldelegate_to: "{{ item }}"loop: "{{ MASTER }}"when:- ansible_hostname=="k8s-master01"
- name: copy kubelet.service to mastercopy:src: service/kubelet.servicedest: /lib/systemd/system/when:- inventory_hostname in groups.master
- name: copy 10-kubelet.conf to mastercopy: src: config/10-kubelet.confdest: /etc/systemd/system/kubelet.service.d/10-kubelet.confwhen:- inventory_hostname in groups.master
- name: copy kubelet-conf.yml to mastertemplate: src: config/kubelet-conf.yml.j2dest: /etc/kubernetes/kubelet-conf.ymlwhen:- inventory_hostname in groups.master
- name: start kubelet for mastersystemd:name: kubeletstate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.master
- name: create serviceaccountshell:cmd: kubectl -n kube-system create serviceaccount kube-proxyignore_errors: yeswhen:- ansible_hostname=="k8s-master01"
- name: create clusterrolebindingshell:cmd: kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxyignore_errors: yeswhen:- ansible_hostname=="k8s-master01"
- name: get SECRET varshell:cmd: kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}'register: SECRETwhen:- ansible_hostname=="k8s-master01"
- name: get JWT_TOKEN varshell:cmd: kubectl -n kube-system get secret/{{ SECRET.stdout }} --output=jsonpath='{.data.token}' | base64 -dregister: JWT_TOKENwhen:- ansible_hostname=="k8s-master01"
- name: set-cluster kube-proxy.kubeconfigshell:cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig={{ K8S_DIR }}/kube-proxy.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-credentials kube-proxy.kubeconfigshell:cmd: kubectl config set-credentials {{ K8S_CLUSTER }} --token={{ JWT_TOKEN.stdout }} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: set-context kube-proxy.kubeconfigshell:cmd: kubectl config set-context {{ K8S_CLUSTER }} --cluster={{ K8S_CLUSTER }} --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: use-context kube-proxy.kubeconfigshell:cmd: kubectl config use-context {{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfigwhen:- ansible_hostname=="k8s-master01"
- name: transfer kube-proxy.kubeconfig files from master01 to master02 master03synchronize:src: /etc/kubernetes/kube-proxy.kubeconfigdest: /etc/kubernetes/mode: pulldelegate_to: "{{ item }}"loop:"{{ MASTER }}"when:- ansible_hostname=="k8s-master01"
- name: copy kube-proxy.conf to mastertemplate: src: config/kube-proxy.conf.j2dest: /etc/kubernetes/kube-proxy.confwhen:- inventory_hostname in groups.master
- name: copy kube-proxy.service to mastercopy: src: service/kube-proxy.servicedest: /lib/systemd/system/when:- inventory_hostname in groups.master
- name: start kube-proxy to mastersystemd:name: kube-proxystate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.master[root@ansible-server kubernetes-master]# vim tasks/install_automatic_completion_tool.yml
- name: install CentOS or Rocky bash-completion toolyum:name: bash-completionwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- ansible_hostname=="k8s-master01"
- name: install Ubuntu bash-completion toolapt:name: bash-completionforce: yeswhen:- ansible_distribution=="Ubuntu"- ansible_hostname=="k8s-master01"
- name: source completion bashshell: |"source <(kubectl completion bash)"echo "source <(kubectl completion bash)" >> ~/.bashrcwhen:- ansible_hostname=="k8s-master01"[root@ansible-server kubernetes-master]# vim tasks/main.yml
- include: copy_etcd_cert.yml
- include: copy_kubernetes_file.yml
- include: create_kubernetes_cert.yml
- include: master_config.yml
- include: install_automatic_completion_tool.yml[root@ansible-server kubernetes-master]# cd ../../
[root@ansible-server ansible]# tree roles/kubernetes-master
roles/kubernetes-master
├── files
│   ├── bin
│   │   ├── kube-apiserver
│   │   ├── kube-controller-manager
│   │   ├── kubectl
│   │   ├── kubelet
│   │   ├── kube-proxy
│   │   └── kube-scheduler
│   ├── cfssl
│   ├── cfssljson
│   ├── config
│   │   └── 10-kubelet.conf
│   ├── service
│   │   ├── kubelet.service
│   │   ├── kube-proxy.service
│   │   └── kube-scheduler.service
│   └── yaml
│       └── bootstrap.secret.yaml
├── tasks
│   ├── copy_etcd_cert.yml
│   ├── copy_kubernetes_file.yml
│   ├── create_kubernetes_cert.yml
│   ├── install_automatic_completion_tool.yml
│   ├── main.yml
│   └── master_config.yml
├── templates
│   ├── config
│   │   ├── kubelet-conf.yml.j2
│   │   └── kube-proxy.conf.j2
│   ├── pki
│   │   ├── admin-csr.json.j2
│   │   ├── apiserver-csr.json.j2
│   │   ├── ca-config.json.j2
│   │   ├── ca-csr.json.j2
│   │   ├── front-proxy-ca-csr.json.j2
│   │   ├── front-proxy-client-csr.json.j2
│   │   ├── manager-csr.json.j2
│   │   └── scheduler-csr.json.j2
│   └── service
│       ├── kube-apiserver.service.j2
│       └── kube-controller-manager.service.j2
└── vars└── main.yml11 directories, 32 files[root@ansible-server ansible]# vim kubernetes_master_role.yml
---
- hosts: master:etcdroles:- role: kubernetes-master[root@ansible-server ansible]# ansible-playbook kubernetes_master_role.yml
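
Before checking node registration, an optional ad-hoc check from the ansible server (not part of the role, just a convenience) can confirm the control-plane services came up on every master; it assumes you run it from /data/ansible so the inventory configured in ansible.cfg is used:

[root@ansible-server ansible]# ansible master -m shell -a 'systemctl is-active kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy'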

12.2 Verify master

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES    AGE     VERSION
k8s-master01.example.local   NotReady   <none>   3m38s   v1.23.6
k8s-master02.example.local   NotReady   <none>   3m37s   v1.23.6
k8s-master03.example.local   NotReady   <none>   3m37s   v1.23.6
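
The NotReady status is expected at this point: the kubelet marks a node as not ready until a CNI network plugin is installed, which happens in section 14 with Calico. If you want to confirm that this is the only issue, inspecting the node conditions on k8s-master01 would show the network-plugin message, for example:

[root@k8s-master01 ~]# kubectl describe node k8s-master01.example.local | grep -A 10 Conditions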

13. Deploy node

13.1 Install node components

[root@ansible-server ansible]# mkdir -p roles/kubernetes-node/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/kubernetes-node/
[root@ansible-server kubernetes-node]# ls
files  tasks  templates  vars[root@ansible-server kubernetes-node]# mkdir files/bin
[root@ansible-server kubernetes-node]# cp /data/ansible/roles/kubernetes-master/files/bin/{kubelet,kube-proxy} files/bin/
[root@ansible-server kubernetes-node]# ls files/bin/
kubelet  kube-proxy[root@ansible-server kubernetes-node]# vim tasks/copy_kubernetes_file.yaml
- name: copy kubernetes files to nodecopy:src: "bin/{{ item }}"dest: /usr/local/bin/mode: 0755loop:- kubelet- kube-proxywhen:- inventory_hostname in groups.node
- name: create /opt/cni/bin directoryfile:path: /opt/cni/binstate: directorywhen:- inventory_hostname in groups.node#Change the NODE01, NODE02 and NODE03 IP addresses below to your own values
[root@ansible-server kubernetes-node]# vim vars/main.yml
ETCD_CERT:- etcd-ca-key.pem- etcd-ca.pem- etcd-key.pem- etcd.pemNODE01: 172.31.3.111
NODE02: 172.31.3.112
NODE03: 172.31.3.113[root@ansible-server kubernetes-node]# vim tasks/copy_etcd_cert.yaml
- name: create /etc/etcd/ssl directory for nodefile:path: /etc/etcd/sslstate: directorywhen:- inventory_hostname in groups.node
- name: transfer etcd certificate files from etcd01 to node01synchronize:src: "/etc/etcd/ssl/{{ item }}"dest: /etc/etcd/ssl/mode: pullloop:"{{ ETCD_CERT }}"delegate_to: "{{ NODE01 }}"when:- ansible_hostname=="k8s-etcd01"
- name: transfer etcd certificate files from etcd01 to node02synchronize:src: "/etc/etcd/ssl/{{ item }}"dest: /etc/etcd/ssl/mode: pullloop:"{{ ETCD_CERT }}"delegate_to: "{{ NODE02 }}"when:- ansible_hostname=="k8s-etcd01"
- name: transfer etcd certificate files from etcd01 to node03synchronize:src: "/etc/etcd/ssl/{{ item }}"dest: /etc/etcd/ssl/mode: pullloop:"{{ ETCD_CERT }}"delegate_to: "{{ NODE03 }}"when:- ansible_hostname=="k8s-etcd01"
- name: create /etc/kubernetes/pki/etcd directoryfile:path: /etc/kubernetes/pki/etcdstate: directorywhen:- inventory_hostname in groups.node
- name: link etcd_ssl to kubernetes pkifile: src: "/etc/etcd/ssl/{{ item }}"dest: "/etc/kubernetes/pki/etcd/{{ item }}"state: linkloop:"{{ ETCD_CERT }}"when:- inventory_hostname in groups.node[root@ansible-server kubernetes-node]# vim vars/main.yml
...
NODE:- 172.31.3.111- 172.31.3.112- 172.31.3.113[root@ansible-server kubernetes-node]# vim tasks/copy_kubernetes_cert.yml 
- name: create /etc/kubernetes/pki directory to nodefile:path: /etc/kubernetes/pkistate: directorywhen:- inventory_hostname in groups.node
- name: transfer ca.pem file from master01 to nodesynchronize:src: /etc/kubernetes/pki/ca.pemdest: /etc/kubernetes/pki/mode: pulldelegate_to: "{{ item }}"loop: "{{ NODE }}"when:- ansible_hostname=="k8s-master01"
- name: transfer ca-key.pem file from master01 to nodesynchronize:src: /etc/kubernetes/pki/ca-key.pemdest: /etc/kubernetes/pki/mode: pulldelegate_to: "{{ item }}"loop: "{{ NODE }}"when:- ansible_hostname=="k8s-master01"
- name: transfer front-proxy-ca.pem file from master01 to nodesynchronize:src: /etc/kubernetes/pki/front-proxy-ca.pemdest: /etc/kubernetes/pki/mode: pulldelegate_to: "{{ item }}"loop: "{{ NODE }}"when:- ansible_hostname=="k8s-master01"
- name: transfer bootstrap-kubelet.kubeconfig file from master01 to nodesynchronize:src: /etc/kubernetes/bootstrap-kubelet.kubeconfigdest: /etc/kubernetes/mode: pulldelegate_to: "{{ item }}"loop: "{{ NODE }}"when:- ansible_hostname=="k8s-master01"[root@ansible-server kubernetes-node]# vim vars/main.yml
...
KUBE_DIRECTROY:- /etc/kubernetes/manifests/- /etc/systemd/system/kubelet.service.d- /var/lib/kubelet- /var/log/kubernetesCLUSTERDNS: 10.96.0.10
POD_SUBNET: 192.168.0.0/12[root@ansible-server kubernetes-node]# mkdir files/service
[root@ansible-server kubernetes-node]# vim files/service/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=
After=containerd.service
Requires=containerd.service[Service]
ExecStart=/usr/local/bin/kubeletRestart=always
StartLimitInterval=0
RestartSec=10[Install]
WantedBy=multi-user.target[root@ansible-server kubernetes-node]# mkdir files/config
[root@ansible-server kubernetes-node]# vim files/config/10-kubelet.conf.j2
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS[root@ansible-server kubernetes-node]# mkdir templates/config
[root@ansible-server kubernetes-node]# vim templates/config/kubelet-conf.yml.j2
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:anonymous:enabled: falsewebhook:cacheTTL: 2m0senabled: truex509:clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:mode: Webhookwebhook:cacheAuthorizedTTL: 5m0scacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- {{ CLUSTERDNS }}
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:imagefs.available: 15%memory.available: 100Minodefs.available: 10%nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s[root@ansible-server kubernetes-node]# vim templates/config/kube-proxy.conf.j2
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:acceptContentTypes: ""burst: 10contentType: application/vnd.kubernetes.protobufkubeconfig: /etc/kubernetes/kube-proxy.kubeconfigqps: 5
clusterCIDR: {{ POD_SUBNET }}
configSyncPeriod: 15m0s
conntrack:max: nullmaxPerCore: 32768min: 131072tcpCloseWaitTimeout: 1h0m0stcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:masqueradeAll: falsemasqueradeBit: 14minSyncPeriod: 0ssyncPeriod: 30s
ipvs:masqueradeAll: trueminSyncPeriod: 5sscheduler: "rr"syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms[root@ansible-server kubernetes-node]# vim files/service/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=
After=network.target[Service]
ExecStart=/usr/local/bin/kube-proxy \--config=/etc/kubernetes/kube-proxy.conf \--v=2Restart=always
RestartSec=10s[Install]
WantedBy=multi-user.target[root@ansible-server kubernetes-node]# vim tasks/node_config.yml
- name: create kubernetes directory to nodefile:path: "{{ item }}"state: directoryloop:"{{ KUBE_DIRECTROY }}"when:- inventory_hostname in groups.node
- name: copy kubelet.service to nodecopy:src: service/kubelet.servicedest: /lib/systemd/system/when:- inventory_hostname in groups.node
- name: copy 10-kubelet.conf to nodecopy: src: config/10-kubelet.conf.j2dest: /etc/systemd/system/kubelet.service.d/10-kubelet.confwhen:- inventory_hostname in groups.node
- name: copy kubelet-conf.yml to nodetemplate: src: config/kubelet-conf.yml.j2dest: /etc/kubernetes/kubelet-conf.ymlwhen:- inventory_hostname in groups.node
- name: start kubelet for nodesystemd:name: kubeletstate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.node
- name: transfer kube-proxy.kubeconfig files from master01 to nodesynchronize:src: /etc/kubernetes/kube-proxy.kubeconfigdest: /etc/kubernetes/mode: pulldelegate_to: "{{ item }}"loop:"{{ NODE }}"when:- ansible_hostname=="k8s-master01"
- name: copy kube-proxy.conf to nodetemplate: src: config/kube-proxy.conf.j2dest: /etc/kubernetes/kube-proxy.confwhen:- inventory_hostname in groups.node
- name: copy kube-proxy.service to nodecopy: src: service/kube-proxy.servicedest: /lib/systemd/system/when:- inventory_hostname in groups.node
- name: start kube-proxy to nodesystemd:name: kube-proxystate: startedenabled: yesdaemon_reload: yeswhen:- inventory_hostname in groups.node[root@ansible-server kubernetes-node]# vim tasks/main.yml
- include: copy_kubernetes_file.yaml
- include: copy_etcd_cert.yaml
- include: copy_kubernetes_cert.yml
- include: node_config.yml[root@ansible-server kubernetes-node]# cd ../../
[root@ansible-server ansible]# tree roles/kubernetes-node/
roles/kubernetes-node/
├── files
│   ├── bin
│   │   ├── kubelet
│   │   └── kube-proxy
│   ├── config
│   │   └── 10-kubelet.conf.j2
│   └── service
│       ├── kubelet.service
│       └── kube-proxy.service
├── tasks
│   ├── copy_etcd_cert.yaml
│   ├── copy_kubernetes_cert.yml
│   ├── copy_kubernetes_file.yaml
│   ├── main.yml
│   └── node_config.yml
├── templates
│   └── config
│       ├── kubelet-conf.yml.j2
│       └── kube-proxy.conf.j2
└── vars└── main.yml8 directories, 13 files[root@ansible-server ansible]# vim kubernetes_node_role.yml
---
- hosts: master:node:etcdroles:- role: kubernetes-node[root@ansible-server ansible]# ansible-playbook kubernetes_node_role.yml
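
As with the masters, a quick ad-hoc check (optional, not part of the role) can confirm kubelet and kube-proxy are active on the worker nodes before verifying registration:

[root@ansible-server ansible]# ansible node -m shell -a 'systemctl is-active kubelet kube-proxy'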

13.2 Verify node

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS     ROLES    AGE   VERSION
k8s-master01.example.local   NotReady   <none>   12m   v1.23.6
k8s-master02.example.local   NotReady   <none>   12m   v1.23.6
k8s-master03.example.local   NotReady   <none>   12m   v1.23.6
k8s-node01.example.local     NotReady   <none>   5s    v1.23.6
k8s-node02.example.local     NotReady   <none>   5s    v1.23.6
k8s-node03.example.local     NotReady   <none>   5s    v1.23.6
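
The worker nodes joined through TLS bootstrapping, so their certificate signing requests should have been approved automatically by the RBAC objects created from bootstrap.secret.yaml; if you want to double-check, they can be listed on k8s-master01:

[root@k8s-master01 ~]# kubectl get csr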

14. Install Calico

14.1 Install calico

[root@ansible-server ansible]# mkdir -p roles/calico/{tasks,vars,templates}
[root@ansible-server ansible]# cd roles/calico
[root@ansible-server calico]# ls
tasks  templates  vars#Set HARBOR_DOMAIN below to your own harbor domain name, and change POD_SUBNET to your planned pod network segment
[root@ansible-server calico]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds
USERNAME: admin
PASSWORD: 123456
POD_SUBNET: 192.168.0.0/12[root@ansible-server calico]# cat templates/calico-etcd.yaml.j2
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see /
apiVersion: v1
kind: Secret
type: Opaque
metadata:name: calico-etcd-secretsnamespace: kube-system
data:# Populate the following with etcd TLS configuration if desired, but leave blank if# not using TLS for etcd.# The keys below should be uncommented and the values populated with the base64# encoded contents of each file that would be associated with the TLS data.# Example command for encoding a file contents: cat <file> | base64 -w 0# etcd-key: null# etcd-cert: null# etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:name: calico-confignamespace: kube-system
data:# Configure this with the location of your etcd cluster.etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"# If you're using TLS enabled etcd uncomment the following.# You must also populate the Secret below with these files.etcd_ca: ""   # "/calico-secrets/etcd-ca"etcd_cert: "" # "/calico-secrets/etcd-cert"etcd_key: ""  # "/calico-secrets/etcd-key"# Typha is disabled.typha_service_name: "none"# Configure the backend to use.calico_backend: "bird"# Configure the MTU to use for workload interfaces and tunnels.# By default, MTU is auto-detected, and explicitly setting this field should not be required.# You can override auto-detection by providing a non-zero value.veth_mtu: "0"# The CNI network configuration to install on each node. The special# values in this config will be automatically populatedi_network_config: |-{"name": "k8s-pod-network","cniVersion": "0.3.1","plugins": [{"type": "calico","log_level": "info","log_file_path": "/var/log/calico/cni/cni.log","etcd_endpoints": "__ETCD_ENDPOINTS__","etcd_key_file": "__ETCD_KEY_FILE__","etcd_cert_file": "__ETCD_CERT_FILE__","etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__","mtu": __CNI_MTU__,"ipam": {"type": "calico-ipam"},"policy": {"type": "k8s"},"kubernetes": {"kubeconfig": "__KUBECONFIG_FILEPATH__"}},{"type": "portmap","snat": true,"capabilities": {"portMappings": true}},{"type": "bandwidth","capabilities": {"bandwidth": true}}]}---
# Source: calico/templates/calico-kube-controllers-rbac.yaml# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: calico-kube-controllers
rules:# Pods are monitored for changing labels.# The node controller monitors Kubernetes nodes.# Namespace and serviceaccount labels are used for policy.- apiGroups: [""]resources:- pods- nodes- namespaces- serviceaccountsverbs:- watch- list- get# Watch for changes to Kubernetes NetworkPolicies.- apiGroups: ["networking.k8s.io"]resources:- networkpoliciesverbs:- watch- list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: calico-kube-controllers
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: calico-kube-controllers
subjects:
- kind: ServiceAccountname: calico-kube-controllersnamespace: kube-system
------
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:name: calico-node
rules:# The CNI plugin needs to get pods, nodes, and namespaces.- apiGroups: [""]resources:- pods- nodes- namespacesverbs:- get# EndpointSlices are used for Service-based network policy rule# enforcement.- apiGroups: ["discovery.k8s.io"]resources:- endpointslicesverbs:- watch - list- apiGroups: [""]resources:- endpoints- servicesverbs:# Used to discover service IPs for advertisement.- watch- list# Pod CIDR auto-detection on kubeadm needs access to config maps.- apiGroups: [""]resources:- configmapsverbs:- get- apiGroups: [""]resources:- nodes/statusverbs:# Needed for clearing NodeNetworkUnavailable flag.- patch---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: calico-node
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: calico-node
subjects:
- kind: ServiceAccountname: calico-nodenamespace: kube-system---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:name: calico-nodenamespace: kube-systemlabels:k8s-app: calico-node
spec:selector:matchLabels:k8s-app: calico-nodeupdateStrategy:type: RollingUpdaterollingUpdate:maxUnavailable: 1template:metadata:labels:k8s-app: calico-nodespec:nodeSelector:kubernetes.io/os: linuxhostNetwork: truetolerations:# Make sure calico-node gets scheduled on all nodes.- effect: NoScheduleoperator: Exists# Mark the pod as a critical add-on for rescheduling.- key: CriticalAddonsOnlyoperator: Exists- effect: NoExecuteoperator: ExistsserviceAccountName: calico-node# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force# deletion": .terminationGracePeriodSeconds: 0priorityClassName: system-node-criticalinitContainers:# This container installs the CNI binaries# and CNI network config file on each node.- name: install-cniimage: docker.io/calico/cni:v3.21.4command: ["/opt/cni/bin/install"]envFrom:- configMapRef:# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.name: kubernetes-services-endpointoptional: trueenv:# Name of the CNI config file to create.- name: CNI_CONF_NAMEvalue: "10-calico.conflist"# The CNI network config to install on each node.- name: CNI_NETWORK_CONFIGvalueFrom:configMapKeyRef:name: calico-configkey: cni_network_config# The location of the etcd cluster.- name: ETCD_ENDPOINTSvalueFrom:configMapKeyRef:name: calico-configkey: etcd_endpoints# CNI MTU Config variable- name: CNI_MTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# Prevents the container from sleeping forever.- name: SLEEPvalue: "false"volumeMounts:- mountPath: /host/opt/cni/binname: cni-bin-dir- mountPath: /host/etc/cni/net.dname: cni-net-dir- mountPath: /calico-secretsname: etcd-certssecurityContext:privileged: true# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes# to communicate with Felix over the Policy Sync API.- name: flexvol-driverimage: docker.io/calico/pod2daemon-flexvol:v3.21.4volumeMounts:- name: flexvol-driver-hostmountPath: /host/driversecurityContext:privileged: truecontainers:# Runs calico-node container on each Kubernetes node. 
This# container programs network policy and routes on each# host.- name: calico-nodeimage: docker.io/calico/node:v3.21.4envFrom:- configMapRef:# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.name: kubernetes-services-endpointoptional: trueenv:# The location of the etcd cluster.- name: ETCD_ENDPOINTSvalueFrom:configMapKeyRef:name: calico-configkey: etcd_endpoints# Location of the CA certificate for etcd.- name: ETCD_CA_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_ca# Location of the client key for etcd.- name: ETCD_KEY_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_key# Location of the client certificate for etcd.- name: ETCD_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_cert# Set noderef for node controller.- name: CALICO_K8S_NODE_REFvalueFrom:fieldRef:fieldPath: spec.nodeName# Choose the backend to use.- name: CALICO_NETWORKING_BACKENDvalueFrom:configMapKeyRef:name: calico-configkey: calico_backend# Cluster type to identify the deployment type- name: CLUSTER_TYPEvalue: "k8s,bgp"# Auto-detect the BGP IP address.- name: IPvalue: "autodetect"# Enable IPIP- name: CALICO_IPV4POOL_IPIPvalue: "Always"# Enable or Disable VXLAN on the default IP pool.- name: CALICO_IPV4POOL_VXLANvalue: "Never"# Set MTU for tunnel device used if ipip is enabled- name: FELIX_IPINIPMTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# Set MTU for the VXLAN tunnel device.- name: FELIX_VXLANMTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# Set MTU for the Wireguard tunnel device.- name: FELIX_WIREGUARDMTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# The default IPv4 pool to create on startup if none exists. Pod IPs will be# chosen from this range. Changing this value after installation will have# no effect. 
This should fall within `--cluster-cidr`.# - name: CALICO_IPV4POOL_CIDR#   value: "192.168.0.0/16"# Disable file logging so `kubectl logs` works.- name: CALICO_DISABLE_FILE_LOGGINGvalue: "true"# Set Felix endpoint to host default action to ACCEPT.- name: FELIX_DEFAULTENDPOINTTOHOSTACTIONvalue: "ACCEPT"# Disable IPv6 on Kubernetes.- name: FELIX_IPV6SUPPORTvalue: "false"- name: FELIX_HEALTHENABLEDvalue: "true"securityContext:privileged: trueresources:requests:cpu: 250mlifecycle:preStop:exec:command:- /bin/calico-node- -shutdownlivenessProbe:exec:command:- /bin/calico-node- -felix-live- -bird-liveperiodSeconds: 10initialDelaySeconds: 10failureThreshold: 6timeoutSeconds: 10readinessProbe:exec:command:- /bin/calico-node- -felix-ready- -bird-readyperiodSeconds: 10timeoutSeconds: 10volumeMounts:# For maintaining CNI plugin API credentials.- mountPath: /host/etc/cni/net.dname: cni-net-dirreadOnly: false- mountPath: /lib/modulesname: lib-modulesreadOnly: true- mountPath: /run/xtables.lockname: xtables-lockreadOnly: false- mountPath: /var/run/caliconame: var-run-calicoreadOnly: false- mountPath: /var/lib/caliconame: var-lib-calicoreadOnly: false- mountPath: /calico-secretsname: etcd-certs- name: policysyncmountPath: /var/run/nodeagent# For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the# parent directory.- name: sysfsmountPath: /sys/fs/# Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.# If the host is known to mount that filesystem already then Bidirectional can be omitted.mountPropagation: Bidirectional- name: cni-log-dirmountPath: /var/log/calico/cnireadOnly: truevolumes:# Used by calico-node.- name: lib-moduleshostPath:path: /lib/modules- name: var-run-calicohostPath:path: /var/run/calico- name: var-lib-calicohostPath:path: /var/lib/calico- name: xtables-lockhostPath:path: /run/xtables.locktype: FileOrCreate- name: sysfshostPath:path: /sys/fs/type: DirectoryOrCreate# Used to install CNI.- name: cni-bin-dirhostPath:path: /opt/cni/bin- name: cni-net-dirhostPath:path: /etc/cni/net.d# Used to access CNI logs.- name: cni-log-dirhostPath:path: /var/log/calico/cni# Mount in the etcd TLS secrets with mode 400.# See  name: etcd-certssecret:secretName: calico-etcd-secretsdefaultMode: 0400# Used to create per-pod Unix Domain Sockets- name: policysynchostPath:type: DirectoryOrCreatepath: /var/run/nodeagent# Used to install Flex Volume Driver- name: flexvol-driver-hosthostPath:type: DirectoryOrCreatepath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---apiVersion: v1
kind: ServiceAccount
metadata:name: calico-nodenamespace: kube-system---
# Source: calico/templates/calico-kube-controllers.yaml
# See 
apiVersion: apps/v1
kind: Deployment
metadata:name: calico-kube-controllersnamespace: kube-systemlabels:k8s-app: calico-kube-controllers
spec:# The controllers can only have a single active instance.replicas: 1selector:matchLabels:k8s-app: calico-kube-controllersstrategy:type: Recreatetemplate:metadata:name: calico-kube-controllersnamespace: kube-systemlabels:k8s-app: calico-kube-controllersspec:nodeSelector:kubernetes.io/os: linuxtolerations:# Mark the pod as a critical add-on for rescheduling.- key: CriticalAddonsOnlyoperator: Exists- key: node-role.kubernetes.io/mastereffect: NoScheduleserviceAccountName: calico-kube-controllerspriorityClassName: system-cluster-critical# The controllers must run in the host network namespace so that# it isn't governed by policy that would prevent it from working.hostNetwork: truecontainers:- name: calico-kube-controllersimage: docker.io/calico/kube-controllers:v3.21.4env:# The location of the etcd cluster.- name: ETCD_ENDPOINTSvalueFrom:configMapKeyRef:name: calico-configkey: etcd_endpoints# Location of the CA certificate for etcd.- name: ETCD_CA_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_ca# Location of the client key for etcd.- name: ETCD_KEY_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_key# Location of the client certificate for etcd.- name: ETCD_CERT_FILEvalueFrom:configMapKeyRef:name: calico-configkey: etcd_cert# Choose which controllers to run.- name: ENABLED_CONTROLLERSvalue: policy,namespace,serviceaccount,workloadendpoint,nodevolumeMounts:# Mount in the etcd TLS secrets.- mountPath: /calico-secretsname: etcd-certslivenessProbe:exec:command:- /usr/bin/check-status- -lperiodSeconds: 10initialDelaySeconds: 10failureThreshold: 6timeoutSeconds: 10readinessProbe:exec:command:- /usr/bin/check-status- -rperiodSeconds: 10volumes:# Mount in the etcd TLS secrets with mode 400.# See  name: etcd-certssecret:secretName: calico-etcd-secretsdefaultMode: 0440---apiVersion: v1
kind: ServiceAccount
metadata:name: calico-kube-controllersnamespace: kube-system---# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evictapiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:name: calico-kube-controllersnamespace: kube-systemlabels:k8s-app: calico-kube-controllers
spec:maxUnavailable: 1selector:matchLabels:k8s-app: calico-kube-controllers---
# Source: calico/templates/calico-typha.yaml---
# Source: calico/templates/configure-canal.yaml---
# Source: calico/templates/kdd-crds.yaml#Modify the following content
[root@ansible-server calico]# grep "etcd_endpoints:.*" templates/calico-etcd.yaml.j2 etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"[root@ansible-server calico]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "{% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}:2379{% if not loop.last %},{% endif %}{% endfor %}"#g' templates/calico-etcd.yaml.j2  [root@ansible-server calico]# grep "etcd_endpoints:.*" templates/calico-etcd.yaml.j2etcd_endpoints: {% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}{% if not loop.last %},{% endif %}{% endfor %}	[root@ansible-server calico]# vim tasks/calico_file.yml
- name: copy calico-etcd.yaml filetemplate:src: calico-etcd.yaml.j2dest: /root/calico-etcd.yamlwhen:- ansible_hostname=="k8s-master01"[root@ansible-server calico]# vim tasks/config.yml
- name: get ETCD_KEY keyshell:cmd: cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'register: ETCD_KEYwhen:- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd-key:.*" linereplace:path: /root/calico-etcd.yamlregexp: '# (etcd-key:) null'replace: '\1 {{ ETCD_KEY.stdout }}'when:- ansible_hostname=="k8s-master01"
- name: get ETCD_CERT keyshell:cmd: cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'register: ETCD_CERTwhen:- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd-cert:.*" linereplace:path: /root/calico-etcd.yamlregexp: '# (etcd-cert:) null'replace: '\1 {{ ETCD_CERT.stdout }}'when:- ansible_hostname=="k8s-master01"
- name: get ETCD_CA keyshell:cmd: cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'when:- ansible_hostname=="k8s-master01"register: ETCD_CA
- name: Modify the ".*etcd-ca:.*" linereplace:path: /root/calico-etcd.yamlregexp: '# (etcd-ca:) null'replace: '\1 {{ ETCD_CA.stdout }}'when:- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd_ca:.*" linereplace:path: /root/calico-etcd.yamlregexp: '(etcd_ca:) ""'replace: '\1 "/calico-secrets/etcd-ca"'when:- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd_cert:.*" linereplace:path: /root/calico-etcd.yamlregexp: '(etcd_cert:) ""'replace: '\1 "/calico-secrets/etcd-cert"'when:- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd_key:.*" linereplace:path: /root/calico-etcd.yamlregexp: '(etcd_key:) ""'replace: '\1 "/calico-secrets/etcd-key"'when:- ansible_hostname=="k8s-master01"
- name: Modify the ".*CALICO_IPV4POOL_CIDR.*" linereplace:path: /root/calico-etcd.yamlregexp: '# (- name: CALICO_IPV4POOL_CIDR)'replace: '\1'when:- ansible_hostname=="k8s-master01"
- name: Modify the ".*192.168.0.0.*" linereplace:path: /root/calico-etcd.yamlregexp: '#   (value:) "192.168.0.0/16"'replace: '  \1 "{{ POD_SUBNET }}"'when:- ansible_hostname=="k8s-master01"
- name: Modify the "image:" linereplace:path: /root/calico-etcd.yamlregexp: '(.*image:) docker.io/calico(/.*)'replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'when:- ansible_hostname=="k8s-master01"[root@ansible-server calico]# vim tasks/download_images.yml
- name: get calico versionshell:chdir: /rootcmd: awk -F "/"  '/image:/{print $NF}' calico-etcd.yamlregister: CALICO_VERSIONwhen:- ansible_hostname=="k8s-master01"
- name: install CentOS or Rocky expect packageyum:name: expectwhen:- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")- ansible_hostname=="k8s-master01"
- name: install Ubuntu expect packageapt:name: expectforce: yeswhen:- ansible_distribution=="Ubuntu"- ansible_hostname=="k8s-master01"
- name: download calico imageshell:cmd: |{% for i in CALICO_VERSION.stdout_lines %}ctr images pull --all-platforms registry-beijing.aliyuncs/raymond9/{{ i }}ctr images tag registry-beijing.aliyuncs/raymond9/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}ctr images remove registry-beijing.aliyuncs/raymond9/{{ i }}expect <<EOFspawn ctr images push --plain-http -u {{ USERNAME }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}expect "Password:" { send "{{ PASSWORD }}\n";exp_continue }EOF{% endfor %}when:- ansible_hostname=="k8s-master01"[root@ansible-server calico]# vim tasks/install_calico.yml
- name: install calicoshell:chdir: /rootcmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f calico-etcd.yaml"when:- ansible_hostname=="k8s-master01"[root@ansible-server calico]# vim tasks/main.yml
- include: calico_file.yml
- include: config.yml
- include: download_images.yml
- include: install_calico.yml[root@ansible-server calico]# cd ../../
[root@ansible-server ansible]# tree roles/calico
roles/calico
├── tasks
│   ├── calico_file.yml
│   ├── config.yml
│   ├── download_images.yml
│   ├── install_calico.yml
│   └── main.yml
├── templates
│   └── calico-etcd.yaml.j2
└── vars└── main.yml3 directories, 7 files[root@ansible-server ansible]# vim calico_role.yml 
---
- hosts: master:etcdroles:- role: calico[root@ansible-server ansible]# ansible-playbook calico_role.yml 
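
If calico does not come up, a useful first check (an optional manual step) is to confirm that the replacements from tasks/config.yml actually landed in the rendered manifest on k8s-master01, for example:

[root@k8s-master01 ~]# grep -E "etcd_endpoints|etcd_ca:|etcd_cert:|etcd_key:|CALICO_IPV4POOL_CIDR" -A 1 /root/calico-etcd.yaml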

14.2 Verify calico

[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-786596c789-gkkfk   1/1     Running   0          77s
calico-node-4d5hz                          1/1     Running   0          77s
calico-node-4v697                          1/1     Running   0          77s
calico-node-g5nk9                          1/1     Running   0          77s
calico-node-ljvlp                          1/1     Running   0          77s
calico-node-vttcj                          1/1     Running   0          77s
calico-node-wwhrl                          1/1     Running   0          77s[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES    AGE   VERSION
k8s-master01.example.local   Ready    <none>   39m   v1.23.6
k8s-master02.example.local   Ready    <none>   39m   v1.23.6
k8s-master03.example.local   Ready    <none>   39m   v1.23.6
k8s-node01.example.local     Ready    <none>   27m   v1.23.6
k8s-node02.example.local     Ready    <none>   27m   v1.23.6
k8s-node03.example.local     Ready    <none>   27m   v1.23.6
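
As an optional smoke test of the pod network, two test pods can be started and one pod IP pinged from the other; the busybox image name here is only an assumption (pull it from Docker Hub or substitute an image already present in your harbor):

[root@k8s-master01 ~]# kubectl run net-test01 --image=busybox:1.28 --restart=Never -- sleep 3600
[root@k8s-master01 ~]# kubectl run net-test02 --image=busybox:1.28 --restart=Never -- sleep 3600
[root@k8s-master01 ~]# kubectl get pod -o wide
[root@k8s-master01 ~]# kubectl exec net-test01 -- ping -c 2 <net-test02 pod IP>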

15. Install CoreDNS

15.1 Install CoreDNS

[root@ansible-server ansible]# mkdir -p roles/coredns/{tasks,templates,vars}
[root@ansible-server ansible]# cd roles/coredns/
[root@ansible-server coredns]# ls
tasks  templates  vars#Set CLUSTERDNS below to the 10th IP address of your planned service network segment, and set HARBOR_DOMAIN to your own harbor domain name
[root@ansible-server coredns]# vim vars/main.yml
CLUSTERDNS: 10.96.0.10
HARBOR_DOMAIN: harbor.raymonds
USERNAME: admin
PASSWORD: 123456[root@ansible-server coredns]# cat templates/coredns.yaml.j2 
apiVersion: v1
kind: ServiceAccount
metadata:name: corednsnamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:kubernetes.io/bootstrapping: rbac-defaultsname: system:coredns
rules:- apiGroups:- ""resources:- endpoints- services- pods- namespacesverbs:- list- watch- apiGroups:- discovery.k8s.ioresources:- endpointslicesverbs:- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:annotations:rbac.authorization.kubernetes.io/autoupdate: "true"labels:kubernetes.io/bootstrapping: rbac-defaultsname: system:coredns
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:coredns
subjects:
- kind: ServiceAccountname: corednsnamespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:name: corednsnamespace: kube-system
data:Corefile: |.:53 {errorshealth {lameduck 5s}readykubernetes cluster.local in-addr.arpa ip6.arpa {fallthrough in-addr.arpa ip6.arpa}prometheus :9153forward . /etc/resolv.conf {max_concurrent 1000}cache 30loopreloadloadbalance}
---
apiVersion: apps/v1
kind: Deployment
metadata:name: corednsnamespace: kube-systemlabels:k8s-app: kube-dnskubernetes.io/name: "CoreDNS"
spec:# replicas: not specified here:# 1. Default is 1.# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.strategy:type: RollingUpdaterollingUpdate:maxUnavailable: 1selector:matchLabels:k8s-app: kube-dnstemplate:metadata:labels:k8s-app: kube-dnsspec:priorityClassName: system-cluster-criticalserviceAccountName: corednstolerations:- key: "CriticalAddonsOnly"operator: "Exists"nodeSelector:kubernetes.io/os: linuxaffinity:podAntiAffinity:requiredDuringSchedulingIgnoredDuringExecution:- labelSelector:matchExpressions:- key: k8s-appoperator: Invalues: ["kube-dns"]topologyKey: kubernetes.io/hostnamecontainers:- name: corednsimage: coredns/coredns:1.8.6imagePullPolicy: IfNotPresentresources:limits:memory: 170Mirequests:cpu: 100mmemory: 70Miargs: [ "-conf", "/etc/coredns/Corefile" ]volumeMounts:- name: config-volumemountPath: /etc/corednsreadOnly: trueports:- containerPort: 53name: dnsprotocol: UDP- containerPort: 53name: dns-tcpprotocol: TCP- containerPort: 9153name: metricsprotocol: TCPsecurityContext:allowPrivilegeEscalation: falsecapabilities:add:- NET_BIND_SERVICEdrop:- allreadOnlyRootFilesystem: truelivenessProbe:httpGet:path: /healthport: 8080scheme: HTTPinitialDelaySeconds: 60timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /readyport: 8181scheme: HTTPdnsPolicy: Defaultvolumes:- name: config-volumeconfigMap:name: corednsitems:- key: Corefilepath: Corefile
---
apiVersion: v1
kind: Service
metadata:name: kube-dnsnamespace: kube-systemannotations:prometheus.io/port: "9153"prometheus.io/scrape: "true"labels:k8s-app: kube-dnskubernetes.io/cluster-service: "true"kubernetes.io/name: "CoreDNS"
spec:selector:k8s-app: kube-dnsclusterIP: 10.96.0.10ports:- name: dnsport: 53protocol: UDP- name: dns-tcpport: 53protocol: TCP- name: metricsport: 9153protocol: TCP[root@ansible-server coredns]# vim templates/coredns.yaml.j2
...
data:Corefile: |.:53 {errorshealth {lameduck 5s}readykubernetes cluster.local in-addr.arpa ip6.arpa {fallthrough in-addr.arpa ip6.arpa}prometheus :9153forward . /etc/resolv.conf {max_concurrent 1000}cache 30loop ##delete the loop plugin here to avoid an internal forwarding loopreloadloadbalance}
...
spec:selector:k8s-app: kube-dnsclusterIP: {{ CLUSTERDNS }} #change this line
...[root@ansible-server coredns]# vim tasks/coredns_file.yml
- name: copy coredns.yaml filetemplate:src: coredns.yaml.j2dest: /root/coredns.yamlwhen:- ansible_hostname=="k8s-master01"[root@ansible-server coredns]# vim tasks/config.yml
- name: Modify the "image:" linereplace:path: /root/coredns.yamlregexp: '(.*image:) coredns(/.*)'replace: '\1 {{ HARBOR_DOMAIN }}\2'[root@ansible-server coredns]# vim tasks/download_images.yml
- name: get coredns versionshell:chdir: /rootcmd: awk -F "/"  '/image:/{print $NF}' coredns.yamlregister: COREDNS_VERSION
- name: download coredns imageshell:cmd: |{% for i in COREDNS_VERSION.stdout_lines %}ctr images pull --all-platforms registry.aliyuncs/google_containers/{{ i }}ctr images tag registry.aliyuncs/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}ctr images remove registry.aliyuncs/google_containers/{{ i }}expect <<EOFspawn ctr images push --plain-http -u {{ USERNAME }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}expect "Password:" { send "{{ PASSWORD }}\n";exp_continue }EOF{% endfor %}[root@ansible-server coredns]# vim tasks/install_coredns.yml
- name: install corednsshell:chdir: /rootcmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f coredns.yaml"[root@ansible-server coredns]# vim tasks/main.yml
- include: coredns_file.yml
- include: config.yml
- include: download_images.yml
- include: install_coredns.yml[root@ansible-server coredns]# cd ../../
[root@ansible-server ansible]# tree roles/coredns/
roles/coredns/
├── tasks
│   ├── config.yml
│   ├── coredns_file.yml
│   ├── download_images.yml
│   ├── install_coredns.yml
│   └── main.yml
├── templates
│   └── coredns.yaml.j2
└── vars└── main.yml3 directories, 7 files[root@ansible-server ansible]# vim coredns_role.yml
---
- hosts: master01roles:- role: coredns[root@ansible-server ansible]# ansible-playbook coredns_role.yml

15.2 Verify coredns

[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep coredns
coredns-66d5dc5c47-95267                   1/1     Running   0          2m3s
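
To confirm that service name resolution works through the new CoreDNS deployment, an nslookup from a test pod is a quick optional check; it reuses the net-test01 pod from the Calico smoke test (any pod with nslookup available would do):

[root@k8s-master01 ~]# kubectl exec net-test01 -- nslookup kubernetes.default.svc.cluster.local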

16. Install Metrics

16.1 Install metrics

[root@ansible-server ansible]# mkdir -p roles/metrics/{files,vars,tasks}
[root@ansible-server ansible]# cd roles/metrics/
[root@ansible-server metrics]# ls
files  tasks  vars#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server metrics]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds
USERNAME: admin
PASSWORD: 123456[root@ansible-server metrics]# cat files/components.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:k8s-app: metrics-serverrbac.authorization.k8s.io/aggregate-to-admin: "true"rbac.authorization.k8s.io/aggregate-to-edit: "true"rbac.authorization.k8s.io/aggregate-to-view: "true"name: system:aggregated-metrics-reader
rules:
- apiGroups:- metrics.k8s.ioresources:- pods- nodesverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:k8s-app: metrics-servername: system:metrics-server
rules:
- apiGroups:- ""resources:- pods- nodes- nodes/stats- namespaces- configmapsverbs:- get- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:k8s-app: metrics-servername: metrics-server-auth-readernamespace: kube-system
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:k8s-app: metrics-servername: metrics-server:system:auth-delegator
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:auth-delegator
subjects:
- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:labels:k8s-app: metrics-servername: system:metrics-server
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:metrics-server
subjects:
- kind: ServiceAccountname: metrics-servernamespace: kube-system
---
apiVersion: v1
kind: Service
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
spec:ports:- name: httpsport: 443protocol: TCPtargetPort: httpsselector:k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:labels:k8s-app: metrics-servername: metrics-servernamespace: kube-system
spec:selector:matchLabels:k8s-app: metrics-serverstrategy:rollingUpdate:maxUnavailable: 0template:metadata:labels:k8s-app: metrics-serverspec:containers:- args:- --cert-dir=/tmp- --secure-port=4443- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname- --kubelet-use-node-status-port- --metric-resolution=15simage: k8s.gcr.io/metrics-server/metrics-server:v0.5.2imagePullPolicy: IfNotPresentlivenessProbe:failureThreshold: 3httpGet:path: /livezport: httpsscheme: HTTPSperiodSeconds: 10name: metrics-serverports:- containerPort: 4443name: httpsprotocol: TCPreadinessProbe:failureThreshold: 3httpGet:path: /readyzport: httpsscheme: HTTPSinitialDelaySeconds: 20periodSeconds: 10resources:requests:cpu: 100mmemory: 200MisecurityContext:readOnlyRootFilesystem: truerunAsNonRoot: truerunAsUser: 1000volumeMounts:- mountPath: /tmpname: tmp-dirnodeSelector:kubernetes.io/os: linuxpriorityClassName: system-cluster-criticalserviceAccountName: metrics-servervolumes:- emptyDir: {}name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:labels:k8s-app: metrics-servername: v1beta1.metrics.k8s.io
spec:group: metrics.k8s.iogroupPriorityMinimum: 100insecureSkipTLSVerify: trueservice:name: metrics-servernamespace: kube-systemversion: v1beta1versionPriority: 100#Modify the following content:
[root@k8s-master01 ~]# vim components.yaml
...spec:containers:- args:- --cert-dir=/tmp- --secure-port=4443- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname- --kubelet-use-node-status-port- --metric-resolution=15s
#Add the following content        - --kubelet-insecure-tls- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem- --requestheader-username-headers=X-Remote-User- --requestheader-group-headers=X-Remote-Group- --requestheader-extra-headers-prefix=X-Remote-Extra-  
...volumeMounts:- mountPath: /tmpname: tmp-dir
#Add the following content - name: ca-sslmountPath: /etc/kubernetes/pki
...volumes:- emptyDir: {}name: tmp-dir
#Add the following content - name: ca-sslhostPath:path: /etc/kubernetes/pki [root@ansible-server metrics]# vim tasks/metrics_file.yml
- name: copy components.yaml filecopy:src: components.yamldest: /root/components.yaml[root@ansible-server metrics]# vim tasks/config.yml
- name: Modify the "image:" linereplace:path: /root/components.yamlregexp: '(.*image:) k8s.gcr.io/metrics-server(/.*)'replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'[root@ansible-server metrics]# vim tasks/download_images.yml
- name: get metrics versionshell:chdir: /rootcmd: awk -F "/"  '/image:/{print $NF}' components.yamlregister: METRICS_VERSION
- name: download metrics imageshell:cmd: |{% for i in METRICS_VERSION.stdout_lines %}ctr images pull --all-platforms registry.aliyuncs/google_containers/{{ i }}ctr images tag registry.aliyuncs/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}ctr images remove registry.aliyuncs/google_containers/{{ i }}expect <<EOFspawn ctr images push --plain-http -u {{ USERNAME }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}expect "Password:" { send "{{ PASSWORD }}\n";exp_continue }EOF{% endfor %}[root@ansible-server metrics]# vim tasks/install_metrics.yml
- name: install metricsshell:chdir: /rootcmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f components.yaml"[root@ansible-server metrics]# vim tasks/main.yml 
- include: metrics_file.yml
- include: config.yml
- include: download_images.yml
- include: install_metrics.yml[root@ansible-server metrics]# cd ../../
[root@ansible-server ansible]# tree roles/metrics/
roles/metrics/
├── files
│   └── components.yaml
├── tasks
│   ├── config.yml
│   ├── download_images.yml
│   ├── install_metrics.yml
│   ├── main.yml
│   └── metrics_file.yml
└── vars└── main.yml3 directories, 7 files[root@ansible-server ansible]# vim metrics_role.yml
---
- hosts: master01roles:- role: metrics[root@ansible-server ansible]# ansible-playbook metrics_role.yml

16.2 Verify metrics

[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep metrics
metrics-server-6f6d4cd59-kbxw6             1/1     Running   0          74s[root@k8s-master01 ~]# kubectl top node 
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01.example.local   101m         5%     1759Mi          48%       
k8s-master02.example.local   83m          4%     1484Mi          41%       
k8s-master03.example.local   119m         5%     1509Mi          41%       
k8s-node01.example.local     68m          3%     771Mi           21%       
k8s-node02.example.local     70m          3%     824Mi           22%       
k8s-node03.example.local     74m          3%     818Mi           22% 
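
Once the metrics-server pod is ready, kubectl top also works at the pod level; as an optional extra check:

[root@k8s-master01 ~]# kubectl top pod -n kube-system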

17. Install dashboard

17.1 Install dashboard

[root@ansible-server ansible]# mkdir -p roles/dashboard/{tasks,vars,files,templates}
[root@ansible-server ansible]# cd roles/dashboard/
[root@ansible-server dashboard]# ls
files  tasks  templates  vars#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server dashboard]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds
USERNAME: admin
PASSWORD: 123456
NODEPORT: 30005[root@ansible-server dashboard]# cat templates/recommended.yaml.j2
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     .0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.apiVersion: v1
kind: Namespace
metadata:name: kubernetes-dashboard---apiVersion: v1
kind: ServiceAccount
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard---kind: Service
apiVersion: v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
spec:ports:- port: 443targetPort: 8443selector:k8s-app: kubernetes-dashboardtype: NodePort---apiVersion: v1
kind: Secret
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-certsnamespace: kubernetes-dashboard
type: Opaque---apiVersion: v1
kind: Secret
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-csrfnamespace: kubernetes-dashboard
type: Opaque
data:csrf: ""---apiVersion: v1
kind: Secret
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-key-holdernamespace: kubernetes-dashboard
type: Opaque---kind: ConfigMap
apiVersion: v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard-settingsnamespace: kubernetes-dashboard---kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
rules:# Allow Dashboard to get, update and delete Dashboard exclusive secrets.- apiGroups: [""]resources: ["secrets"]resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]verbs: ["get", "update", "delete"]# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.- apiGroups: [""]resources: ["configmaps"]resourceNames: ["kubernetes-dashboard-settings"]verbs: ["get", "update"]# Allow Dashboard to get metrics.- apiGroups: [""]resources: ["services"]resourceNames: ["heapster", "dashboard-metrics-scraper"]verbs: ["proxy"]- apiGroups: [""]resources: ["services/proxy"]resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]verbs: ["get"]---kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboard
rules:# Allow Metrics Scraper to get metrics from the Metrics server- apiGroups: ["metrics.k8s.io"]resources: ["pods", "nodes"]verbs: ["get", "list", "watch"]---apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: kubernetes-dashboard
subjects:- kind: ServiceAccountname: kubernetes-dashboardnamespace: kubernetes-dashboard---apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:name: kubernetes-dashboard
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: kubernetes-dashboard
subjects:- kind: ServiceAccountname: kubernetes-dashboardnamespace: kubernetes-dashboard---kind: Deployment
apiVersion: apps/v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
spec:replicas: 1revisionHistoryLimit: 10selector:matchLabels:k8s-app: kubernetes-dashboardtemplate:metadata:labels:k8s-app: kubernetes-dashboardspec:containers:- name: kubernetes-dashboardimage: kubernetesui/dashboard:v2.4.0imagePullPolicy: Alwaysports:- containerPort: 8443protocol: TCPargs:- --auto-generate-certificates- --namespace=kubernetes-dashboard# Uncomment the following line to manually specify Kubernetes API server Host# If not specified, Dashboard will attempt to auto discover the API server and connect# to it. Uncomment only if the default does not work.# - --apiserver-host=http://my-address:portvolumeMounts:- name: kubernetes-dashboard-certsmountPath: /certs# Create on-disk volume to store exec logs- mountPath: /tmpname: tmp-volumelivenessProbe:httpGet:scheme: HTTPSpath: /port: 8443initialDelaySeconds: 30timeoutSeconds: 30securityContext:allowPrivilegeEscalation: falsereadOnlyRootFilesystem: truerunAsUser: 1001runAsGroup: 2001volumes:- name: kubernetes-dashboard-certssecret:secretName: kubernetes-dashboard-certs- name: tmp-volumeemptyDir: {}serviceAccountName: kubernetes-dashboardnodeSelector:"kubernetes.io/os": linux# Comment the following tolerations if Dashboard must not be deployed on mastertolerations:- key: node-role.kubernetes.io/mastereffect: NoSchedule---kind: Service
apiVersion: v1
metadata:labels:k8s-app: dashboard-metrics-scrapername: dashboard-metrics-scrapernamespace: kubernetes-dashboard
spec:ports:- port: 8000targetPort: 8000selector:k8s-app: dashboard-metrics-scraper---kind: Deployment
apiVersion: apps/v1
metadata:labels:k8s-app: dashboard-metrics-scrapername: dashboard-metrics-scrapernamespace: kubernetes-dashboard
spec:replicas: 1revisionHistoryLimit: 10selector:matchLabels:k8s-app: dashboard-metrics-scrapertemplate:metadata:labels:k8s-app: dashboard-metrics-scraperspec:securityContext:seccompProfile:type: RuntimeDefaultcontainers:- name: dashboard-metrics-scraperimage: kubernetesui/metrics-scraper:v1.0.7ports:- containerPort: 8000protocol: TCPlivenessProbe:httpGet:scheme: HTTPpath: /port: 8000initialDelaySeconds: 30timeoutSeconds: 30volumeMounts:- mountPath: /tmpname: tmp-volumesecurityContext:allowPrivilegeEscalation: falsereadOnlyRootFilesystem: truerunAsUser: 1001runAsGroup: 2001serviceAccountName: kubernetes-dashboardnodeSelector:"kubernetes.io/os": linux# Comment the following tolerations if Dashboard must not be deployed on mastertolerations:- key: node-role.kubernetes.io/mastereffect: NoSchedulevolumes:- name: tmp-volumeemptyDir: {}[root@ansible-server dashboard]# vim templates/recommended.yaml.j2
...
kind: Service
apiVersion: v1
metadata:labels:k8s-app: kubernetes-dashboardname: kubernetes-dashboardnamespace: kubernetes-dashboard
spec:type: NodePort #add this lineports:- port: 443targetPort: 8443nodePort: {{ NODEPORT }} #add this lineselector:k8s-app: kubernetes-dashboard
...[root@ansible-server dashboard]# vim files/admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:name: admin-usernamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: name: admin-userannotations:rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: cluster-admin
subjects:
- kind: ServiceAccountname: admin-usernamespace: kube-system[root@ansible-server dashboard]# vim tasks/dashboard_file.yml
- name: copy recommended.yaml filetemplate:src: recommended.yaml.j2dest: /root/recommended.yaml
- name: copy admin.yaml filecopy:src: admin.yamldest: /root/admin.yaml[root@ansible-server dashboard]# vim tasks/config.yml
- name: Modify the "image:" linereplace:path: /root/recommended.yamlregexp: '(.*image:) kubernetesui(/.*)'replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'[root@ansible-server dashboard]# vim tasks/download_images.yml
- name: get dashboard versionshell:chdir: /rootcmd: awk -F "/"  '/image:/{print $NF}' recommended.yamlregister: DASHBOARD_VERSION
- name: download dashboard imageshell:cmd: |{% for i in DASHBOARD_VERSION.stdout_lines %}ctr images pull --all-platforms registry.aliyuncs/google_containers/{{ i }}ctr images tag registry.aliyuncs/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}ctr images remove registry.aliyuncs/google_containers/{{ i }}expect <<EOFspawn ctr images push --plain-http -u {{ USERNAME }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}expect "Password:" { send "{{ PASSWORD }}\n";exp_continue }EOF{% endfor %}[root@ansible-server dashboard]# vim tasks/install_dashboard.yml
- name: install dashboardshell:chdir: /rootcmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f recommended.yaml -f admin.yaml"[root@ansible-server dashboard]# vim tasks/main.yml
- include: dashboard_file.yml
- include: config.yml
- include: download_images.yml
- include: install_dashboard.yml[root@ansible-server dashboard]# cd ../../
[root@ansible-server ansible]# tree roles/dashboard/
roles/dashboard/
├── files
│   └── admin.yaml
├── tasks
│   ├── config.yml
│   ├── dashboard_file.yml
│   ├── download_images.yml
│   ├── install_dashboard.yml
│   └── main.yml
├── templates
│   └── recommended.yaml.j2
└── vars└── main.yml4 directories, 8 files[root@ansible-server ansible]# vim dashboard_role.yml
---
- hosts: master01roles:- role: dashboard[root@ansible-server ansible]# ansible-playbook dashboard_role.yml

17.2 Log in to dashboard

https://172.31.3.101:30005
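
Because the service was patched to type NodePort with nodePort 30005 (the NODEPORT variable), the dashboard is reachable on that port of any cluster node, not only 172.31.3.101; the assigned port can be confirmed with:

[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard kubernetes-dashboard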

View the token value:

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-vh65j
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-userkubernetes.io/service-account.uid: 02afbef9-872f-4ab6-9741-a94e19df2d54Type:  kubernetes.io/service-account-tokenData
====
ca.crt:     1411 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkJTRk9vTmt0OFhYWmhPaWkzbklvRUtWd1psU2xWbzlzRWhWY0hnNEVLYm8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZoNjVqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwMmFmYmVmOS04NzJmLTRhYjYtOTc0MS1hOTRlMTlkZjJkNTQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.NnZ7gsK4XR7V_5OWVMeDtVjit9_SzZaV3AUYnIF8sJzKM7-WjnC6d4P9WwlbG9Tjx_tSVt4f0er9zznyQjX36tPsJ4fI6pnjMkgotddnR_PpIpnsgtMzhXtLwTjX35RrTuw4MoFg_rjnCUOs5xl5JHXPbYfwT0yVBglV4a62xnPcRkwowSGUwTlFbhqUebFLGXHIVCphHmCAWDke5u4nyTA5RvyYLC_bmbRAw8KxJNNseu3g7JQkHYQ5-aA1pD8RJuvAhE5StCbuYqRZNWK36qSE_1bE6lpl6feaEM9-8ZI3_FNi7_gtFaVItQFO-BchoXsfYYfAGCRGPEdhAqxxCQ
