Overview

This article describes the installation of Grid Infrastructure (GI) 19c on a single host (not RAC).
The Oracle Database 19c software has already been installed, but no database has been created yet; see the earlier article on that topic.
The host runs Oracle Linux 7 with the prerequisite package (oracle-database-preinstall-19c) already installed. The database software owner is oracle, and GI will be installed as the grid user.
Installing GI requires at least 8 GB of memory on the host.
Grid Infrastructure is referred to as GI below.
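
A quick way to confirm the memory requirement before starting (a minimal check, not part of the original walkthrough):

$ grep MemTotal /proc/meminfo
$ free -h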

Preparation

Create the GI User

*** This part is very important. Do not miss any group; otherwise the installation will fail, and such failures are almost always caused by incomplete group membership. ***

According to Chapter 6 of the installation guide, installing GI requires the grid user and the following groups:

# groupadd -g 54321 oinstall
# groupadd -g 54322 dba
# groupadd -g 54323 oper
# groupadd -g 54324 backupdba
# groupadd -g 54325 dgdba
# groupadd -g 54326 kmdba
# groupadd -g 54327 asmdba
# groupadd -g 54328 asmoper
# groupadd -g 54329 asmadmin
# groupadd -g 54330 racdba

Because the prerequisite package was installed first, most of these groups already exist:

$ tail /etc/group
...
oinstall:x:54321:oracle
dba:x:54322:oracle
oper:x:54323:oracle
backupdba:x:54324:oracle
dgdba:x:54325:oracle
kmdba:x:54326:oracle
racdba:x:54330:oracle

Only the three groups 54327-54329 are missing, so we add them:

groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
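
To confirm the three new groups were created (an optional check, not in the original steps):

$ grep -E '^asm(dba|oper|admin):' /etc/group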

Next, create the GI user. The installation guide states:

The Grid user must be a member of the OSASM group (asmadmin) and the OSDBA for ASM group (asmdba).

The guide also states:

For Oracle Restart installations, to successfully install Oracle Database, ensure that the grid user is a member of the racdba group.

In other words, to install Oracle Database in an Oracle Restart environment, the GI user must be a member of the racdba group.

Run the following command:

useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba,racdba grid

Verify:

$ id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54327(asmdba),54329(asmadmin)

Set the password:

passwd grid

Create the Directory Structure

The GI user also has its own ORACLE_HOME and ORACLE_BASE.
ORACLE_HOME is set to /u01/app/19.0.0/grid, which is more concise than the oracle user's home (for example, /u01/app/oracle/product/19.0.0/dbhome_1); ORACLE_BASE is set to /u01/app/grid, analogous to the oracle user's /u01/app/oracle.

# Run as root
mkdir -p  /u01/app/19.0.0/grid
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01/app/19.0.0/grid
chown -R grid:oinstall /u01/app/grid
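
Optionally verify the ownership (a quick check, not in the original steps):

# ls -ld /u01/app/19.0.0/grid /u01/app/grid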

Download the Software

Download it from OTN. The file name is LINUX.X64_193000_grid_home.zip, about 2.8 GB.

Install and Configure the GI Software

All operations in this section are performed as the GI user, grid.

Set Environment Variables

Add the following to ~grid/.bash_profile:

export ORACLE_HOME=/u01/app/19.0.0/grid
export ORACLE_BASE=/u01/app/grid
export PATH=$ORACLE_HOME/bin:$PATH
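
Reload the profile and confirm the variables took effect (an optional check, not from the original article):

$ source ~/.bash_profile
$ echo $ORACLE_HOME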

Installation

Since 12.2, installing GI is simply a matter of unzipping the image into the predetermined ORACLE_HOME. Unzip it as the grid user. The installed home takes about 7 GB, and the extraction takes nearly 5 minutes.

$ cd $ORACLE_HOME
$ unzip -q /vagrant/LINUX.X64_193000_grid_home.zip
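
To confirm the extraction finished and the home is roughly the expected size (an optional check, not in the original steps):

$ du -sh $ORACLE_HOME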

Storage Preparation

Create two 12 GB disks. They are dynamically allocated disks created in VirtualBox and attached to the host, which sees them as sdc and sdd:

# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdd                   8:48   0   12G  0 disk
sdb                   8:16   0 15.6G  0 disk
sdc                   8:32   0   12G  0 disk
sda                   8:0    0 36.5G  0 disk
├─sda2                8:2    0   36G  0 part
│ ├─vg_main-lv_swap 252:1    0    4G  0 lvm  [SWAP]
│ └─vg_main-lv_root 252:0    0   32G  0 lvm  /
└─sda1                8:1    0  500M  0 part /boot

Partition the disks; the keystrokes are n, p, Enter, Enter, Enter, w:

# fdisk /dev/sdc
# fdisk /dev/sdd
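
The same keystrokes can also be fed to fdisk non-interactively (a sketch, assuming sdc and sdd are blank disks with no existing partitions):

# Create one primary partition spanning each disk, accepting all defaults
for d in /dev/sdc /dev/sdd; do
    printf 'n\np\n\n\n\nw\n' | fdisk "$d"
done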

The partitions sdc1 and sdd1 have been created successfully:

# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdd                   8:48   0   12G  0 disk
└─sdd1                8:49   0   12G  0 part
sdb                   8:16   0 15.6G  0 disk
sdc                   8:32   0   12G  0 disk
└─sdc1                8:33   0   12G  0 part
sda                   8:0    0 36.5G  0 disk
├─sda2                8:2    0   36G  0 part
│ ├─vg_main-lv_swap 252:1    0    4G  0 lvm  [SWAP]
│ └─vg_main-lv_root 252:0    0   32G  0 lvm  /
└─sda1                8:1    0  500M  0 part /boot

Configure the Software

Start the setup wizard as the GI user grid:

$ cd $ORACLE_HOME
$ ./gridSetup.sh 

For the first run, choose Set Up Software Only.

There is only one node and RAC is not being configured, so simply click Next:

Configure the ASM operating system groups:

ORACLE_HOME was fixed when the software was unzipped; this step sets ORACLE_BASE:

Root script execution configuration:

Prerequisite checks; ignore the warnings:

Pre-installation summary:

Start the installation:

Run the root script:

# /u01/app/19.0.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster or Grid Infrastructure for a Stand-Alone Server execute the following command as grid user:
/u01/app/19.0.0/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

Configuration complete:

Configure Storage Persistence

ASM devices require storage persistence, meaning that device names and permissions must not change across reboots. There are three approaches: udev, ASMLib, and ASM Filter Driver (Oracle ASMFD).
ASMFD is the newest but is not supported on every operating system; it will be covered in a separate article. ASMLib is dated and is not discussed here. udev ships with the operating system, so no extra driver is needed. udev is used here; see the UDEV SCSI Rules article in the references, which is very thorough, and thanks to its author.
First obtain each disk's SCSI ID, the one piece of information that never changes:

# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VBb483d0cb-d9040f2a
# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
1ATA_VBOX_HARDDISK_VBbc3d0f6f-2c4d4511

Mark the SCSI disks as trusted devices:

# echo "options=-g" >> /etc/scsi_id.config

Add the ASM disk rules.
The RESULT value is the SCSI ID. The disks are owned by grid:dba with mode 660; since the oracle user also belongs to the dba group, oracle can access these devices and a database can be installed on them:

# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb483d0cb-d9040f2a", SYMLINK+="asm-disk1", OWNER="grid", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBbc3d0f6f-2c4d4511", SYMLINK+="asm-disk2", OWNER="grid", GROUP="dba", MODE="0660"

Notify the system of the device changes:

# /sbin/partprobe /dev/sdc1
# /sbin/partprobe /dev/sdd1

The following script generates these rule lines and commands in bulk (a usage example follows the loops):

i=1
for disk in c d e f; do
        scsiid=$(/usr/lib/udev/scsi_id -g -u -d /dev/sd$disk)
        echo 'KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="'${scsiid}'", SYMLINK+="asm-disk'${i}'", OWNER="grid", GROUP="dba", MODE="0660"'
        ((i=i+1))
done

for disk in c d e f; do
        echo /sbin/partprobe /dev/sd${disk}1
done
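
One possible way to apply the output (an assumption; generate_asm_rules.sh is a hypothetical file containing the first loop above, edited to list only the disks that actually exist, here c and d):

# run as root: append the generated rule lines to the udev rules file
bash generate_asm_rules.sh >> /etc/udev/rules.d/99-oracle-asmdevices.rules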

At this point the device permissions and ownership have already changed:

# ls -l /dev/sd?1
brw-rw----. 1 root disk 8,  1 Sep  8 10:28 /dev/sda1
brw-rw----. 1 grid dba  8, 33 Sep  8 11:31 /dev/sdc1
brw-rw----. 1 grid dba  8, 49 Sep  8 11:31 /dev/sdd1
# ls -l /dev/asm*
lrwxrwxrwx. 1 root root 4 Sep  8 11:31 /dev/asm-disk1 -> sdc1
lrwxrwxrwx. 1 root root 4 Sep  8 11:31 /dev/asm-disk2 -> sdd1

Test the udev configuration:

# udevadm test /block/sdc/sdc1
# udevadm test /block/sdd/sdd1

If the /block path does not exist, try another path, e.g. udevadm test /sys/dev/block/8:17

Sample output:

# udevadm test /block/sdc/sdc1
calling: test
version 219
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.

=== trie on-disk ===
tool version:          219
file size:         8201136 bytes
header size             80 bytes
strings            2142216 bytes
nodes              6058840 bytes
Load module index
Network interface NamePolicy= disabled on kernel command line, ignoring.
Created link configuration context.
timestamp of '/etc/udev/rules.d' changed
Skipping overridden file: /usr/lib/udev/rules.d/80-net-name-slot.rules.
Reading rules file: /usr/lib/udev/rules.d/10-dm.rules
Reading rules file: /usr/lib/udev/rules.d/100-balloon.rules
Reading rules file: /usr/lib/udev/rules.d/11-dm-lvm.rules
Reading rules file: /usr/lib/udev/rules.d/13-dm-disk.rules
Reading rules file: /usr/lib/udev/rules.d/40-redhat-disable-dell-ir-camera.rules
Reading rules file: /usr/lib/udev/rules.d/40-redhat-disable-lenovo-ir-camera.rules
Reading rules file: /usr/lib/udev/rules.d/40-redhat.rules
Reading rules file: /usr/lib/udev/rules.d/42-usb-hid-pm.rules
Reading rules file: /usr/lib/udev/rules.d/50-udev-default.rules
Reading rules file: /usr/lib/udev/rules.d/59-fc-wwpn-id.rules
invalid key/value pair in file /usr/lib/udev/rules.d/59-fc-wwpn-id.rules on line 10, starting at character 26 (';')
invalid key/value pair in file /usr/lib/udev/rules.d/59-fc-wwpn-id.rules on line 11, starting at character 29 (';')
invalid key/value pair in file /usr/lib/udev/rules.d/59-fc-wwpn-id.rules on line 12, starting at character 25 (';')
Reading rules file: /usr/lib/udev/rules.d/60-alias-kmsg.rules
Reading rules file: /usr/lib/udev/rules.d/60-cdrom_id.rules
Reading rules file: /usr/lib/udev/rules.d/60-drm.rules
Reading rules file: /usr/lib/udev/rules.d/60-evdev.rules
Reading rules file: /usr/lib/udev/rules.d/60-keyboard.rules
Reading rules file: /usr/lib/udev/rules.d/60-net.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-alsa.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-input.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-serial.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage-tape.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-v4l.rules
Reading rules file: /usr/lib/udev/rules.d/60-raw.rules
Reading rules file: /etc/udev/rules.d/60-vboxadd.rules
Reading rules file: /usr/lib/udev/rules.d/61-accelerometer.rules
Reading rules file: /usr/lib/udev/rules.d/64-btrfs-dm.rules
Reading rules file: /usr/lib/udev/rules.d/64-btrfs.rules
Reading rules file: /usr/lib/udev/rules.d/69-dm-lvm-metad.rules
Reading rules file: /usr/lib/udev/rules.d/70-mouse.rules
Reading rules file: /usr/lib/udev/rules.d/70-power-switch.rules
Reading rules file: /usr/lib/udev/rules.d/70-touchpad.rules
Reading rules file: /usr/lib/udev/rules.d/70-uaccess.rules
Reading rules file: /usr/lib/udev/rules.d/71-seat.rules
Reading rules file: /usr/lib/udev/rules.d/73-idrac.rules
Reading rules file: /usr/lib/udev/rules.d/73-seat-late.rules
Reading rules file: /usr/lib/udev/rules.d/75-net-description.rules
Reading rules file: /usr/lib/udev/rules.d/75-probe_mtd.rules
Reading rules file: /usr/lib/udev/rules.d/75-tty-description.rules
Reading rules file: /usr/lib/udev/rules.d/76-phys-port-name.rules
Reading rules file: /usr/lib/udev/rules.d/78-sound-card.rules
Reading rules file: /usr/lib/udev/rules.d/80-drivers.rules
Skipping empty file: /etc/udev/rules.d/80-net-name-slot.rules
Reading rules file: /usr/lib/udev/rules.d/80-net-setup-link.rules
Reading rules file: /usr/lib/udev/rules.d/81-kvm-rhel.rules
Reading rules file: /usr/lib/udev/rules.d/90-vconsole.rules
Reading rules file: /usr/lib/udev/rules.d/95-dm-notify.rules
Reading rules file: /usr/lib/udev/rules.d/95-udev-late.rules
Reading rules file: /etc/udev/rules.d/99-oracle-asmdevices.rules
Reading rules file: /usr/lib/udev/rules.d/99-qemu-guest-agent.rules
Reading rules file: /usr/lib/udev/rules.d/99-systemd.rules
rules contain 24576 bytes tokens (2048 * 12 bytes), 13540 bytes strings
2016 strings (25472 bytes), 1350 de-duplicated (12599 bytes), 667 trie nodes used
GROUP 6 /usr/lib/udev/rules.d/50-udev-default.rules:52
LINK 'disk/by-path/fc---lun-0-part1' /usr/lib/udev/rules.d/59-fc-wwpn-id.rules:15
LINK 'disk/by-id/ata-VBOX_HARDDISK_VBb483d0cb-d9040f2a-part1' /usr/lib/udev/rules.d/60-persistent-storage.rules:56
LINK 'disk/by-path/pci-0000:00:0d.0-ata-3.0-part1' /usr/lib/udev/rules.d/60-persistent-storage.rules:71
IMPORT builtin 'blkid' /usr/lib/udev/rules.d/60-persistent-storage.rules:89
probe /dev/sdc1 raid offset=0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdc' /etc/udev/rules.d/99-oracle-asmdevices.rules:1
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdc'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdc'(out) '1ATA_VBOX_HARDDISK_VBb483d0cb-d9040f2a'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdc' [20936] exit with return code 0
OWNER 54322 /etc/udev/rules.d/99-oracle-asmdevices.rules:1
GROUP 54322 /etc/udev/rules.d/99-oracle-asmdevices.rules:1
MODE 0660 /etc/udev/rules.d/99-oracle-asmdevices.rules:1
LINK 'asm-disk1' /etc/udev/rules.d/99-oracle-asmdevices.rules:1
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdc' /etc/udev/rules.d/99-oracle-asmdevices.rules:2
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdc'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdc'(out) '1ATA_VBOX_HARDDISK_VBb483d0cb-d9040f2a'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdc' [20937] exit with return code 0
handling device node '/dev/sdc1', devnum=b8:33, mode=0660, uid=54322, gid=54322
preserve permissions /dev/sdc1, 060660, uid=54322, gid=54322
preserve already existing symlink '/dev/block/8:33' to '../sdc1'
found 'b8:33' claiming '/run/udev/links/\x2fasm-disk1'
creating link '/dev/asm-disk1' to '/dev/sdc1'
preserve already existing symlink '/dev/asm-disk1' to 'sdc1'
found 'b8:33' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fata-VBOX_HARDDISK_VBb483d0cb-d9040f2a-part1'
creating link '/dev/disk/by-id/ata-VBOX_HARDDISK_VBb483d0cb-d9040f2a-part1' to '/dev/sdc1'
preserve already existing symlink '/dev/disk/by-id/ata-VBOX_HARDDISK_VBb483d0cb-d9040f2a-part1' to '../../sdc1'
found 'b8:49' claiming '/run/udev/links/\x2fdisk\x2fby-path\x2ffc---lun-0-part1'
found 'b8:33' claiming '/run/udev/links/\x2fdisk\x2fby-path\x2ffc---lun-0-part1'
found 'b8:1' claiming '/run/udev/links/\x2fdisk\x2fby-path\x2ffc---lun-0-part1'
creating link '/dev/disk/by-path/fc---lun-0-part1' to '/dev/sdc1'
atomically replace '/dev/disk/by-path/fc---lun-0-part1'
found 'b8:33' claiming '/run/udev/links/\x2fdisk\x2fby-path\x2fpci-0000:00:0d.0-ata-3.0-part1'
creating link '/dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1' to '/dev/sdc1'
preserve already existing symlink '/dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1' to '../../sdc1'
created db file '/run/udev/data/b8:33' for '/block/sdc/sdc1'
.ID_FS_TYPE_NEW=
ACTION=add
DEVLINKS=/dev/asm-disk1 /dev/disk/by-id/ata-VBOX_HARDDISK_VBb483d0cb-d9040f2a-part1 /dev/disk/by-path/fc---lun-0-part1 /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1
DEVNAME=/dev/sdc1
DEVPATH=/block/sdc/sdc1
DEVTYPE=partition
FC_TARGET_LUN=0
ID_ATA=1
ID_ATA_FEATURE_SET_PM=1
ID_ATA_FEATURE_SET_PM_ENABLED=1
ID_ATA_SATA=1
ID_ATA_SATA_SIGNAL_RATE_GEN2=1
ID_ATA_WRITE_CACHE=1
ID_ATA_WRITE_CACHE_ENABLED=1
ID_BUS=ata
ID_FS_TYPE=
ID_MODEL=VBOX_HARDDISK
ID_MODEL_ENC=VBOX\x20HARDDISK\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_PART_ENTRY_DISK=8:32
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=25163776
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:0d.0-ata-3.0
ID_PATH_TAG=pci-0000_00_0d_0-ata-3_0
ID_REVISION=1.0
ID_SERIAL=VBOX_HARDDISK_VBb483d0cb-d9040f2a
ID_SERIAL_SHORT=VBb483d0cb-d9040f2a
ID_TYPE=disk
MAJOR=8
MINOR=33
PARTN=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=2696692
Unload module index
Unloaded link configuration context.

Reload the udev rules:

udevadm control --reload-rules
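
If the new ownership does not appear immediately, re-triggering block device events can help (a common additional step, not part of the original walkthrough):

udevadm trigger --type=devices --action=change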

From now on, the device names of these disks stay the same after every reboot, and so do the permissions, i.e. they remain owned by grid:dba.

Configure GI

As the GI user (grid), start the GI setup program again:

$ cd $ORACLE_HOME
$ ./gridSetup.sh

This time, choose to configure Oracle Restart, i.e. Configure Oracle Grid Infrastructure for a Standalone Server (Oracle Restart):

Select the two 12 GB disks configured earlier:

Set the ASM passwords:

Set the management options:

Set the root script execution options:

Prerequisite checks; choose to ignore (the cvuqdisk package is for RAC):

Pre-installation summary:

Start the installation; it soon reaches the root script stage:

Run the root script (this script takes a while, so be patient):

# /u01/app/19.0.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/ol7-vagrant/crsconfig/roothas_2019-09-08_11-50-35AM.log
2019/09/08 11:50:39 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node ol7-vagrant successfully pinned.
2019/09/08 11:50:51 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

ol7-vagrant     2019/09/08 11:51:50     /u01/app/grid/crsdata/ol7-vagrant/olr/backup_20190908_115150.olr     724960844
2019/09/08 11:51:51 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

At last, the success screen appears!

Post-Installation

Additional Environment Variables

Add the following to the GI user's (grid) .bash_profile:

export ORACLE_SID=+ASM

Verification

At this point the GI configuration is complete and the disk group for the Oracle database installation is ready. This can also be confirmed with asmca:

After GI is set up successfully, the ASM instance appears in /etc/oratab; at this point no database has been created yet:

$ tail /etc/oratab
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM:/u01/app/19.0.0/grid:N             # line added by Agent

sqlplus can also connect to the instance:

[grid@ol7-vagrant ~]$ . oraenv
ORACLE_SID = [+ASM] ?
The Oracle base remains unchanged with value /u01/app/grid
[grid@ol7-vagrant ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 8 12:27:43 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select name,state,type from v$asm_diskgroup;

NAME                           STATE       TYPE
------------------------------ ----------- ------
DATA                           MOUNTED     NORMAL
SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/sd*

After rebooting the host, the device permissions remain unchanged:

$ ls -l /dev/sd?1
brw-rw----. 1 root disk 8,  1 Sep  8 12:36 /dev/sda1
brw-rw----. 1 grid dba  8, 33 Sep  8 12:42 /dev/sdc1
brw-rw----. 1 grid dba  8, 49 Sep  8 12:42 /dev/sdd1

The ASM instance also starts automatically:

$ ps -ef|grep ASM
grid      4280     1  0 12:37 ?        00:00:00 asm_pmon_+ASM
grid      4282     1  0 12:37 ?        00:00:00 asm_clmn_+ASM
grid      4285     1  0 12:37 ?        00:00:00 asm_psp0_+ASM
grid      4309     1  1 12:37 ?        00:00:05 asm_vktm_+ASM
grid      4313     1  0 12:37 ?        00:00:00 asm_gen0_+ASM
grid      4316     1  0 12:37 ?        00:00:00 asm_mman_+ASM
grid      4320     1  0 12:37 ?        00:00:00 asm_gen1_+ASM
grid      4323     1  0 12:37 ?        00:00:00 asm_diag_+ASM
grid      4325     1  0 12:37 ?        00:00:00 asm_pman_+ASM
grid      4327     1  0 12:37 ?        00:00:00 asm_dia0_+ASM
grid      4329     1  0 12:37 ?        00:00:00 asm_dbw0_+ASM
grid      4331     1  0 12:37 ?        00:00:00 asm_lgwr_+ASM
grid      4333     1  0 12:37 ?        00:00:00 asm_ckpt_+ASM
grid      4335     1  0 12:37 ?        00:00:00 asm_smon_+ASM
grid      4337     1  0 12:37 ?        00:00:00 asm_lreg_+ASM
grid      4339     1  0 12:37 ?        00:00:00 asm_pxmn_+ASM
grid      4341     1  0 12:37 ?        00:00:00 asm_rbal_+ASM
grid      4343     1  0 12:37 ?        00:00:00 asm_gmon_+ASM
grid      4345     1  0 12:37 ?        00:00:00 asm_mmon_+ASM
grid      4347     1  0 12:37 ?        00:00:00 asm_mmnl_+ASM
grid      4376     1  0 12:37 ?        00:00:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid      4974  4289  0 12:42 pts/0    00:00:00 grep --color=auto ASM

The asmcmd commands work as expected:

$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304     24568    24344                0           12172              0             N  DATA/
$ asmcmd lsdsk
Path
/dev/sdc1
/dev/sdd1

All resources are healthy:

$ crsctl stat resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details                                                                                    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       ol7-vagrant              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       ol7-vagrant              STABLE
ora.asm
               ONLINE  ONLINE       ol7-vagrant              Started,STABLE
ora.ons
               OFFLINE OFFLINE      ol7-vagrant              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       ol7-vagrant              STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       ol7-vagrant              STABLE
--------------------------------------------------------------------------------
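
In an Oracle Restart environment, srvctl provides another quick check (an optional verification; output not shown here):

$ srvctl status asm
$ srvctl status listener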

Most importantly, the listener is also up. It was created by the grid user, and it is recommended to use this GI listener when creating databases later:

$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 08-SEP-2019 12:45:50

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-vagrant)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                08-SEP-2019 12:37:09
Uptime                    0 days 0 hr. 8 min. 40 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.0.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/ol7-vagrant/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-vagrant)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
The command completed successfully

Errors

During several days of installation attempts, the following error came up repeatedly. The cause is an incorrect storage persistence setup: the grid user simply lacks write permission on the disk devices. If these disks are still owned by root, the persistence configuration has definitely not taken effect.

INFO:  [Sep 7, 2019 3:16:35 PM] ORA-15018: diskgroup cannot be created
INFO:  [Sep 7, 2019 3:16:35 PM] Skipping line: ORA-15018: diskgroup cannot be created
INFO:  [Sep 7, 2019 3:16:35 PM] ORA-15031: disk specification '/dev/sdd' matches no disks
INFO:  [Sep 7, 2019 3:16:35 PM] Skipping line: ORA-15031: disk specification '/dev/sdd' matches no disks
INFO:  [Sep 7, 2019 3:16:35 PM] ORA-15025: could not open disk "/dev/sdd"
INFO:  [Sep 7, 2019 3:16:35 PM] Skipping line: ORA-15025: could not open disk "/dev/sdd"
INFO:  [Sep 7, 2019 3:16:35 PM] ORA-27041: unable to open file
INFO:  [Sep 7, 2019 3:16:35 PM] Skipping line: ORA-27041: unable to open file
INFO:  [Sep 7, 2019 3:16:35 PM] Skipping line:
INFO:  [Sep 7, 2019 3:16:35 PM] Skipping line:
INFO:  [Sep 7, 2019 3:16:35 PM] Skipping line:
INFO:  [Sep 7, 2019 3:16:35 PM] Completed Plugin named: asmca
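
A quick way to check for this condition (a diagnostic suggestion, not from the original article):

# ls -l /dev/asm-disk* /dev/sd?1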

References

  1. UDEV SCSI Rules Configuration In Oracle Linux 5, 6 and 7
  2. Grid Infrastructure Installation and Upgrade Guide (19c for Linux)
