Setting up the OpenStack environment and the OpenStack identity service
OpenStack: the Glance image service and the Nova compute service
OpenStack: the Neutron networking service; launching an instance
OpenStack: the Dashboard service, instance management, and the Cinder block storage service
The Dashboard service
Dashboard (Horizon) is a web interface that lets cloud administrators and users manage the various OpenStack resources and services.
Installation and configuration
1. Install the package
[root@controller ~]# yum install openstack-dashboard -y
2. Edit /etc/openstack-dashboard/local_settings
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller" # configure the dashboard to use the OpenStack services on the controller node
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST # enable the Identity API version 3
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" # default role assigned to users created through the dashboard
ALLOWED_HOSTS = ['*', ] # allow all hosts to access the dashboard
# Configure the memcached session storage service
#CACHES = {
# 'default': {
# 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
# },
#}
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Set the default domain for users created through the dashboard to default
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
# Configure the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
# For now we only use the flat public (provider) network, which supports none of these features, so set them all to False
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
# Optionally configure the time zone
TIME_ZONE = "Asia/Shanghai"
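Since local_settings is plain Python, a quick syntax check before restarting httpd catches slips such as a missing closing brace. A minimal sketch, run here against an inline sample rather than the real file (point the path at /etc/openstack-dashboard/local_settings in practice; python3 is used, though deployments of this era shipped python2 — the check is the same):

```shell
# Sketch: byte-compile the settings file to catch Python syntax errors.
# A temporary sample stands in for /etc/openstack-dashboard/local_settings.
S=$(mktemp --suffix=.py)
cat > "$S" <<'EOF'
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
ALLOWED_HOSTS = ['*', ]
EOF
status=fail
python3 -m py_compile "$S" && status=ok
echo "$status"
rm -f "$S"
```

If the file has an error (for example an unclosed `{`), py_compile prints the offending line instead of staying silent.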
3. Restart the web server and the session storage service
[root@controller ~]# systemctl restart httpd.service memcached.service
Verify operation
Test in a browser:
http://controller/dashboard
The language can be set to Chinese.
Managing instances from the GUI
Deleting an instance
Switch to the admin user; the demo user does not have permission.
1. Delete the instances created earlier
2. Delete the network
Click provider.
- The subnet must be deleted first
Then delete the external network.
Creating an instance from the GUI
1. Create a network
Click Create Network.
Name it as you like; here it is set to public.
Project: admin
Network type: Flat
Physical network (its underlying network): provider
- Create a subnet
Click public.
Click Subnet Details.
Click Create; the subnet is now created.
Log out and log back in as the demo user.
2. Create an instance
Click Launch Instance.
A regular user's default quota is 10 instances and cannot be exceeded.
Source: the image the instance boots from.
Use the public network.
Click Launch Instance.
3. View the instance
- View the instance from the console
Open the console to view the details of the instance just created.
Assigned IP: 172.25.4.1
- Connect to the instance over SSH
Passwordless login.
View the network topology.
Network isolation
In production, having every tenant share a single public network is insecure. Tenants need network isolation: each tenant can be given a private network on its own subnet, and routers can be added so tenants can reach other networks.
1. Private network configuration on the controller node
The components were already installed when the controller node's public network was configured.
- Configure the service component
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2 # enable the Modular Layer 2 (ML2) plug-in, the routing service, and overlapping IP addresses
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit # configure RabbitMQ message queue access
auth_strategy = keystone
notify_nova_on_port_status_changes = True # notify Compute of network topology changes
notify_nova_on_port_data_changes = True
- Configure the Modular Layer 2 (ML2) plug-in
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan # enable flat, VLAN, and VXLAN networks
tenant_network_types = vxlan # enable VXLAN tenant (private) networks
mechanism_drivers = linuxbridge,l2population # enable the Linux bridge and layer-2 population mechanisms
extension_drivers = port_security # enable the port security extension driver
[ml2_type_vxlan] # VXLAN network identifier (VNI) range for private networks
vni_ranges = 1:1000
- Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networks for instances and handles security group rules.
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan] # enable VXLAN overlay networks, set the IP of the physical interface that carries the overlay, and enable layer-2 population
enable_vxlan = True
local_ip = 172.25.4.1 # IP of the underlying physical interface that handles the overlay, i.e. the controller node's management IP
l2_population = True
- Configure the layer-3 agent
The layer-3 agent provides routing and NAT services for private virtual networks.
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT] # configure the Linux bridge interface driver and the external network bridge
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
- Restart the networking services
[root@controller ~]# systemctl restart neutron-server.service neutron-linuxbridge-agent.service
- Start the layer-3 agent and configure it to start at boot
[root@controller ~]# systemctl enable --now neutron-l3-agent.service
- Verify
[root@controller ~]# . admin-openrc
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 2e788a9d-1c23-43c1-876f-91f6d3e81075 | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| 44757567-6f81-4168-9248-79294fe628ff | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| 7e44b6b0-f4f8-4334-ab0e-fc8d423000bc | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| a828635d-e5e6-440b-802a-2fa12ba902a6 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| fe0345c5-3cf4-42a0-86d4-ca2b454b73e4 | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
2. Private network configuration on the compute node
- Configure the Linux bridge agent
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan] # enable VXLAN overlay networks, set the IP of the physical interface that carries the overlay, and enable layer-2 population
enable_vxlan = True
local_ip = 172.25.4.2 # the compute node's management network IP
l2_population = True
- Restart the Linux bridge agent service
[root@compute1 ~]# systemctl restart neutron-linuxbridge-agent.service
- Edit /etc/openstack-dashboard/local_settings
When the public network was configured earlier, the OpenStack networking features were disabled in local_settings; they now need to be enabled.
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': True,
'enable_ha_router': True,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
}
- Restart the services
[root@controller ~]# systemctl restart httpd memcached.service
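Since every flag in the OPENSTACK_NEUTRON_NETWORK block flips from False to True, the edit can also be done with one sed range instead of by hand. A hypothetical sketch, shown on an inline sample file standing in for /etc/openstack-dashboard/local_settings:

```shell
# Sketch: flip every flag between the OPENSTACK_NEUTRON_NETWORK line
# and the closing brace from False to True.
SETTINGS=$(mktemp)   # stands in for /etc/openstack-dashboard/local_settings
cat > "$SETTINGS" <<'EOF'
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
}
EOF
sed -i '/^OPENSTACK_NEUTRON_NETWORK/,/^}/s/False/True/' "$SETTINGS"
count=$(grep -c True "$SETTINGS")
echo "$count"
rm -f "$SETTINGS"
```

The address range limits the substitution to that one dict, so other False settings in the file are left alone.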
3. Editing networks from the GUI
To allocate a public (floating) IP and bind it to an instance:
Switch to admin.
Click public.
Click Edit Network in the upper right.
Mark the public network as an external network.
Switch to the demo user.
The network label has changed.
A regular user creates a private network
Click Create Network.
The gateway does not need to be set; one will be assigned.
Click Create; the network is now created.
The regular user's private network has been created.
As you can see, the public network and the private network cannot reach each other yet.
Add a router
Click Create Router.
This is what SDN, software-defined networking, means.
Software-Defined Networking (SDN) is a new network architecture and one way of implementing network virtualization. Its core technology, OpenFlow, separates the control plane of network devices from the data plane, enabling flexible control of network traffic, making the network a smarter pipe, and providing a good platform for innovation in core networks and applications.
The router has been added!
Click Routers.
Click router1.
Click Interfaces (add an interface).
Click Submit.
View the network topology.
The router now has an extra interface, so traffic from the internal network can get out to the external network.
Create another instance, this time on the private network
Create the instance
vm2 is identical to vm1 except for the network it uses.
The console shows it was assigned IP 10.0.0.3.
At this point the instance's DNS server is assigned arbitrarily and cannot resolve baidu.com.
Using sudo, add nameserver 114.114.114.114 (to /etc/resolv.conf); after that baidu.com can be pinged.
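The DNS fix above amounts to appending one line to /etc/resolv.conf inside the instance. Sketched here against a temporary file so it can be run anywhere; in the instance the target is /etc/resolv.conf and the redirection needs sudo:

```shell
# Sketch: add a public resolver. RESOLV stands in for /etc/resolv.conf;
# inside the instance this would be:
#   sudo sh -c 'echo "nameserver 114.114.114.114" >> /etc/resolv.conf'
RESOLV=$(mktemp)
echo 'nameserver 114.114.114.114' >> "$RESOLV"
line=$(cat "$RESOLV")
echo "$line"
rm -f "$RESOLV"
```

Note that plain `sudo echo … >> /etc/resolv.conf` fails because the redirection runs in the unprivileged shell, hence the `sh -c` wrapper.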
From the public side you can only reach as far as the router; the private network itself is unreachable.
So how do we reach the private network?
Click Associate Floating IP.
Click Allocate Floating IP; an address will be allocated from the public external network.
Click Allocate IP.
172.25.4.103 is allocated.
[root@controller ~]# ssh cirros@172.25.4.103 # passwordless login
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:8b:72:87 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe8b:7287/64 scope link
valid_lft forever preferred_lft forever
Now the external network can reach the instance directly.
Packaging a cloud image
Install a 5 GB virtual machine from a local install image
Give the root partition 5 GB
Add an IP address to make it easier to work over SSH:
ip addr add 172.25.4.200/24 dev eth0
Configure the yum repositories
[root@localhost ~]# vi /etc/yum.repos.d/dvd.repo
[dvd]
name=iso
baseurl=http://172.25.4.250/iso
gpgcheck=0
[cloud]
name=cloud-init
baseurl=http://172.25.4.250/rhel7
gpgcheck=0
[root@localhost ~]# yum repolist
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
cloud | 2.9 kB 00:00
dvd | 4.3 kB 00:00
cloud/primary_db | 27 kB 00:00
repo id repo name status
cloud cloud-init 27
dvd iso 5,152
repolist: 5,179
[root@localhost ~]# yum install -y acpid # ACPI daemon (Advanced Configuration and Power Interface)
[root@localhost ~]# systemctl enable --now acpid
[root@localhost ~]# yum install -y cloud-init
[root@localhost ~]# yum install -y cloud-utils-growpart # grows the root partition at first boot
[root@localhost ~]# cd /etc/cloud/
[root@localhost cloud]# vi cloud.cfg
[root@localhost cloud]# echo "NOZEROCONF=yes" >> /etc/sysconfig/network
[root@localhost cloud]# ls
cloud.cfg cloud.cfg.d templates
[root@localhost cloud]# cd /boot/grub2/
[root@localhost grub2]# ls
device.map fonts grub.cfg grubenv i386-pc locale
Output the instance's boot log to the console
[root@localhost grub2]# vi grub.cfg # to view the instance's console log, append serial-console parameters to the kernel line
linux16 /boot/vmlinuz-3.10.0-957.el7.x86_64 root=UUID=57ac63cd-5061-4462-a70f-14d264bc2cfe ro LANG=zh_CN.UTF-8 console=tty0 console=ttyS0,115200n8
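Appending those console parameters can also be scripted with sed rather than editing grub.cfg by hand. A hypothetical sketch, shown on an inline sample kernel line (the UUID is a placeholder; point GRUB at /boot/grub2/grub.cfg in practice):

```shell
# Sketch: append console= parameters to every linux16 kernel line.
GRUB=$(mktemp)   # stands in for /boot/grub2/grub.cfg
echo 'linux16 /boot/vmlinuz-3.10.0-957.el7.x86_64 root=UUID=xxxx ro LANG=zh_CN.UTF-8' > "$GRUB"
sed -i 's/^[[:space:]]*linux16 .*/& console=tty0 console=ttyS0,115200n8/' "$GRUB"
hits=$(grep -c 'console=ttyS0' "$GRUB")
echo "$hits"
rm -f "$GRUB"
```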
Disable the firewall and SELinux
[root@localhost grub2]# systemctl stop firewalld
[root@localhost grub2]# systemctl disable firewalld
[root@localhost grub2]# vi /etc/sysconfig/selinux
SELINUX=disabled
Configure dynamic IP assignment (DHCP)
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=dhcp
DEVICE=eth0
ONBOOT=yes
[root@localhost ~]# poweroff
[root@foundation4 ~]# cd /var/lib/libvirt/images/
[root@foundation4 images]# ls
base.qcow2 vm1 vm11 vm12.qcow2 vm2 vm4 vm6 vm8
demo.qcow2 vm10 vm12 vm13 vm3 vm5 vm7 vm9
[root@foundation4 images]# virt-sysprep -d demo # clean the base image
[root@foundation4 images]# virt-sparsify --compress demo.qcow2 /var/www/html/demo.qcow2
# compress demo.qcow2 into /var/www/html, Apache's default document root
[root@foundation4 images]# cd /var/www/html/
[root@foundation4 html]# ls
3000 demo.qcow2 docker-ce index.html iso mitaka rhel7
[root@foundation4 html]# du -h demo.qcow2
522M demo.qcow2
[root@controller ~]# ssh cloud-user@172.25.4.104
[cloud-user@vm3 ~]$
Instance console log
Log in as the admin user
Upload the cloud image
Click Create Image
RHEL 7 requires a minimum of 512 MB of memory
Click Create Image
Create an instance flavor
Click Create Flavor
ID: five flavors already exist, so this one gets ID 6; with auto, a UUID would be generated automatically, which is unwieldy
Log in as the demo user
Create a new instance, vm3
Click Launch Instance
Set the instance name to vm3
Use the demo image
Use the m2.mano flavor
Use the private network
Click Launch Instance
Instance vm3 has been created!
Associate a floating IP
The Cinder block storage service
The OpenStack Block Storage service (Cinder) adds persistent storage to virtual machines: it provides the infrastructure for managing volumes and interacts with the Compute service to supply volumes to instances. The service also enables management of volume snapshots and volume types.
Install and configure the controller node
1. Create the database
- Connect to the database server as root with the database client
[root@controller ~]# mysql -p
- Create the cinder database
MariaDB [(none)]> CREATE DATABASE cinder;
- Grant the cinder database the appropriate access
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
-> IDENTIFIED BY 'cinder';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
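The two GRANT statements differ only in the host part, so they can be generated from a loop and piped into mysql. A sketch that just prints the SQL (in practice, pipe the output into `mysql -p`):

```shell
# Sketch: generate the grants for both 'localhost' and '%' (any host).
sql=$(
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'${host}' IDENTIFIED BY 'cinder';"
  done
)
printf '%s\n' "$sql"
```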
2. Source the admin credentials to gain access to admin-only CLI commands
[root@controller ~]# . admin-openrc
3. Create the service credentials
- Create a cinder user
[root@controller ~]# openstack user create --domain default --password cinder cinder
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 2cc353d733a74c629c0c7db9327e540a |
| enabled | True |
| id | 3c9bbec6abcf4f32a259068d562fbb3a |
| name | cinder |
+-----------+----------------------------------+
- Add the admin role to the cinder user
[root@controller ~]# openstack role add --project service --user cinder admin
- Create the cinder and cinderv2 service entities
[root@controller ~]# openstack service create --name cinder \
> --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 95eaa080a85e4168b72da7faeb58aa2a |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 \
> --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 205b16b5d18b4cf185bd7fa6436c6168 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
4. Create the Block Storage service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne \
> volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | e25930519d364e82a83d20367a80a988 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 95eaa080a85e4168b72da7faeb58aa2a |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | bc6e79852540402893833c018dcfb6fd |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 95eaa080a85e4168b72da7faeb58aa2a |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 9f19a0ace1da48f196e54a65e38e26fb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 95eaa080a85e4168b72da7faeb58aa2a |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | bc45dea72a4b42d28252aded88c5820e |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 205b16b5d18b4cf185bd7fa6436c6168 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 80ec65ecc1574101ab99e3fd09abed6a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 205b16b5d18b4cf185bd7fa6436c6168 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | c7e8731460da44b4abb4edb0d093bacc |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 205b16b5d18b4cf185bd7fa6436c6168 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
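The six endpoint-create commands above follow one pattern (two services × three interfaces), so they can be generated in a loop. A sketch that prints the commands via echo; removing the echo would run them for real (requires the admin credentials sourced as above):

```shell
# Sketch: generate the six 'openstack endpoint create' commands for the
# volume (v1) and volumev2 (v2) services across all three interfaces.
cmds=$(
  for svc in volume:v1 volumev2:v2; do
    name=${svc%%:*} ver=${svc##*:}
    for iface in public internal admin; do
      echo openstack endpoint create --region RegionOne \
        "$name" "$iface" "http://controller:8776/${ver}/%\(tenant_id\)s"
    done
  done
)
printf '%s\n' "$cmds"
```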
Install and configure the components
1. Install the package
[root@controller ~]# yum install openstack-cinder -y
2. Edit /etc/cinder/cinder.conf
[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit # configure RabbitMQ message queue access
auth_strategy = keystone # configure Identity service access
my_ip = 172.25.4.1 # the management interface IP of the controller node
[database] # configure database access
connection = mysql+pymysql://cinder:cinder@controller/cinder
[oslo_messaging_rabbit] # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken] # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency] # configure the lock path
lock_path = /var/lib/cinder/tmp
3. Populate the Block Storage service database
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Configure Compute to use Block Storage
[root@controller ~]# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Finalize installation
- Restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
- Start the Block Storage services and configure them to start at boot
[root@controller ~]# systemctl enable --now openstack-cinder-api.service openstack-cinder-scheduler.service
Install and configure a storage node
Bring up a new virtual machine
1. Prepare the environment
- Set the hostname
[root@server3 ~]# hostnamectl set-hostname block1
- Add host resolution on every node
[root@block1 ~]# vim /etc/hosts
172.25.4.1 controller
172.25.4.2 compute1
172.25.4.3 block1
- Configure the yum repository
Copy it directly from the controller node:
[root@controller ~]# scp /etc/yum.repos.d/openstack.repo block1:/etc/yum.repos.d/
2. Install the supporting utility packages
- Install the LVM packages
[root@block1 ~]# yum install lvm2
- Enable the LVM metadata service to start at boot
[root@block1 ~]# systemctl enable lvm2-lvmetad.service
3. Attach a 20 GB virtual disk to block1
[root@block1 ~]# fdisk -l /dev/vdb
Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
4. Create the LVM physical volume /dev/vdb
[root@block1 ~]# pvcreate /dev/vdb
Physical volume "/dev/vdb" successfully created.
5. Create the LVM volume group cinder-volumes
[root@block1 ~]# vgcreate cinder-volumes /dev/vdb
Volume group "cinder-volumes" successfully created
6. Add a filter that accepts only the /dev/vda and /dev/vdb devices and rejects everything else
[root@block1 ~]# vim /etc/lvm/lvm.conf
filter = [ "a/vda/", "a/vdb/", "r/.*/"]
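For illustration only (this is not how LVM evaluates it internally), the accept/reject logic of that filter can be sketched as a case statement: devices matching vda or vdb are accepted, and the trailing `r/.*/` rejects everything else:

```shell
# Sketch of the filter semantics: "a/vda/" and "a/vdb/" accept,
# "r/.*/" rejects every remaining device.
results=$(
  for dev in /dev/vda /dev/vdb /dev/sda /dev/sr0; do
    case $dev in
      */vda*|*/vdb*) echo "$dev accepted" ;;
      *)             echo "$dev rejected" ;;
    esac
  done
)
printf '%s\n' "$results"
```

Order matters in the real filter too: LVM uses the first pattern that matches, so the accepts must precede the catch-all reject.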
Install and configure the components
1. Install the packages
[root@block1 ~]# yum install openstack-cinder targetcli python-keystone -y
2. Edit /etc/cinder/cinder.conf
[root@block1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 172.25.4.3
enabled_backends = lvm # enable the LVM back end
glance_api_servers = http://controller:9292 # location of the Image service API
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[lvm] # configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Finalize installation
Start the Block Storage volume service and its dependencies, and configure them to start when the system boots
[root@block1 ~]# systemctl enable --now openstack-cinder-volume.service target.service
Verify
[root@controller ~]# . admin-openrc
[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2020-09-30T11:13:24.000000 | - |
| cinder-volume | block1@lvm | nova | enabled | up | 2020-09-30T11:13:29.000000 | - |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
Once a volume is created and attached to an instance, it appears inside the instance as a new disk (here /dev/vdb) and can be formatted and mounted:
mkfs.xfs /dev/vdb
mkdir /data
mount /dev/vdb /data
df -h /data