In the first part of the article series you could read how a kickstart file made the installation of Oracle Linux 7 a lot more bearable. In this part of the series it’s all about configuring the operating system. The installation of Grid Infrastructure and the Oracle database is for another set of posts.
There are quite a few differences between Oracle Linux 6 and 7
To me the transition from Oracle Linux 6 to 7 feels like the step from Solaris 9 to 10 at the time. Personally I think that a lot has changed. It's fair to say, though, that it was announced quite some time ago that the network stack commands we know and love are deprecated and might go away. Even with Oracle Linux 6 there was a threat that NetworkManager would now be the only tool to modify your network settings (which thankfully was not the case). A lot of the Linux community's efforts have now come to fruition, and it's time to adjust to the future, even when it's painful (and it is, at least a bit).
Configuring the network
The warning has been out there for quite a while, but now it has come true: no more system-config-network-tui to configure the network! No more ifconfig! Oh dear, quite a bit of learning to be done. Luckily someone else has done all the legwork and documented the changes. A good example is this one:
So first of all, don't fear: although all network interfaces are configured using NetworkManager now, you can still use a command line tool: nmtui. After trying it out I have to say I'm not really convinced of its usability. What appears better is nmcli, the NetworkManager command line tool, although its use is quite confusing too; it appears to me as if the whole NetworkManager toolset was developed for laptop users, not servers. But I digress. I have a few interfaces in my RAC VM: the first was configured during the installation, eth[1-3] aren't configured yet.
[root@localhost ~]# nmcli connection show
NAME         UUID                                  TYPE            DEVICE
System eth0  77e3f8a9-76d0-4051-a8f2-cbbe39dab089  802-3-ethernet  eth0
[root@localhost ~]# nmcli device status
DEVICE  TYPE      STATE         CONNECTION
eth0    ethernet  connected     System eth0
eth1    ethernet  disconnected  --
eth2    ethernet  disconnected  --
eth3    ethernet  disconnected  --
lo      loopback  unmanaged    --
[root@localhost ~]#
At this point I have used eth0 as the management network (similar to the way Exadata does) and will use the other networks for the database. eth1 will act as the public network, eth2 and eth3 will be private.
Although the network interfaces can be named differently for device name persistence I stick with the old naming for now. I don’t want to run into trouble with the installer just yet. On physical hardware you are very likely to see very different network interface names, the kernel uses a naming scheme identifying where the cards are (on the main board, or in extension cards for example). I’ll write another post about that soon.
Using dnsmasq (on the host) I configure my hosts for these addresses:
[root@ol62 ~]# grep rac12pri /etc/hosts
192.168.100.107  rac12pri1.example.com      rac12pri1
192.168.100.108  rac12pri1-vip.example.com  rac12pri1-vip
192.168.100.109  rac12pri2.example.com      rac12pri2
192.168.100.110  rac12pri2-vip.example.com  rac12pri2-vip
192.168.100.111  rac12pri-scan.example.com  rac12pri-scan
192.168.100.112  rac12pri-scan.example.com  rac12pri-scan
192.168.100.113  rac12pri-scan.example.com  rac12pri-scan
Configuring the interfaces is actually not too hard once you get the hang of it, although it took me a little while. It almost appears as if something that was simple and easy to use has been made difficult.
[root@localhost ~]# nmcli con add con-name eth1 ifname eth1 type ethernet ip4 192.168.100.107/24 gw4 192.168.100.1
[root@localhost ~]# nmcli con add con-name eth2 ifname eth2 type ethernet ip4 192.168.101.107/24
[root@localhost ~]# nmcli con add con-name eth3 ifname eth3 type ethernet ip4 192.168.102.107/24
[root@localhost ~]# nmcli con show
NAME         UUID                                  TYPE            DEVICE
eth2         ccc7f592-b563-4b9d-a36b-2b45809e4643  802-3-ethernet  eth2
eth1         ae897dee-42ff-4ccd-843b-7c97ba0d5315  802-3-ethernet  eth1
System eth0  77e3f8a9-76d0-4051-a8f2-cbbe39dab089  802-3-ethernet  eth0
eth3         b6074c9a-dcc4-4487-9a8a-052e4c60bbca  802-3-ethernet  eth3
I can now verify the IP addresses using the "ip" tool (ifconfig is not installed; I haven't checked yet whether a compatibility package exists).
[root@localhost ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:6e:6f:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.150.111/24 brd 192.168.150.255 scope global eth0
    inet6 fe80::5054:ff:fe6e:6f67/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:96:ad:88 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.107/24 brd 192.168.100.255 scope global eth1
    inet6 fe80::5054:ff:fe96:ad88/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:c1:cc:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.107/24 brd 192.168.101.255 scope global eth2
    inet6 fe80::5054:ff:fec1:cc8e/64 scope link
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:7e:59:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.107/24 brd 192.168.102.255 scope global eth3
    inet6 fe80::5054:ff:fe7e:5945/64 scope link
       valid_lft forever preferred_lft forever
Now what's left is setting the hostname, which is a simple call to "hostnamectl --static set-hostname rac12pri1". nmcli provides an interface for changing the hostname as well. I repeated the steps for node 2; they are identical except for the IP addresses, of course.
So that concludes the network setup.
Managing Linux daemons
If you are curious about configuring services per runlevel, there is another surprise in store:
[root@rac12pri2 ~]# chkconfig --list

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

iprdump         0:off   1:off   2:on    3:on    4:on    5:on    6:off
iprinit         0:off   1:off   2:on    3:on    4:on    5:on    6:off
iprupdate       0:off   1:off   2:on    3:on    4:on    5:on    6:off
netconsole      0:off   1:off   2:off   3:off   4:off   5:off   6:off
network         0:off   1:off   2:on    3:on    4:on    5:on    6:off
pmcd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
pmie            0:off   1:off   2:off   3:off   4:off   5:off   6:off
pmlogger        0:off   1:off   2:off   3:off   4:off   5:off   6:off
pmmgr           0:off   1:off   2:off   3:off   4:off   5:off   6:off
pmproxy         0:off   1:off   2:off   3:off   4:off   5:off   6:off
pmwebd          0:off   1:off   2:off   3:off   4:off   5:off   6:off
rhnsd           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rac12pri2 ~]#
If you have just become familiar with upstart, there is some bad news: upstart has been replaced with systemd. This might be the right time to read up on it if you aren't familiar with it yet:
Things are a little different with systemd, so here is an example of how to enable and start the NTP service. It has to be installed first if that hasn't already happened. You should also add the -x flag in /etc/sysconfig/ntpd. First I would like to see whether the service is available at all. You use systemctl for this: instead of "chkconfig --list ntpd" you call systemctl as shown:
[root@rac12pri ~]# systemctl list-units --type service --all | grep ntpd
ntpd.service          loaded inactive dead  Network Time Service
ntpdate.service       loaded inactive dead  Set time via NTP
I have to get used to the new syntax: previously you used “service <whatever> status” and then, if you needed, typed backspace a few times and changed status to start. The new syntax is closer to human language but less practical: systemctl status <service>. Changing status to start requires more typing.
The check proved that the service exists (i.e. the NTP package is installed), but it is not started. We can change this:
[root@rac12pri ~]# systemctl enable ntpd.service
[root@rac12pri ~]# systemctl start ntpd.service
[root@rac12pri ~]# systemctl status ntpd.service
ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled)
   Active: active (running) since Tue 2014-12-16 15:38:47 GMT; 1s ago
  Process: 5179 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5180 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─5180 /usr/sbin/ntpd -u ntp:ntp -g -x

Dec 16 15:38:47 rac12pri ntpd: Listen normally on 8 eth1 fe80::5054:ff:fe96:ad88 UDP 123
Dec 16 15:38:47 rac12pri ntpd: Listen normally on 9 eth2 fe80::5054:ff:fec1:cc8e UDP 123
Dec 16 15:38:47 rac12pri ntpd: Listen normally on 10 eth3 fe80::5054:ff:fe7e:5945 UDP 123
Dec 16 15:38:47 rac12pri ntpd: Listen normally on 11 eth0 fe80::5054:ff:fe6e:6f67 UDP 123
Dec 16 15:38:47 rac12pri ntpd: Listening on routing socket on fd #28 for interface updates
Dec 16 15:38:47 rac12pri ntpd: 0.0.0.0 c016 06 restart
Dec 16 15:38:47 rac12pri ntpd: 0.0.0.0 c012 02 freq_set ntpd 0.000 PPM
Dec 16 15:38:47 rac12pri ntpd: 0.0.0.0 c011 01 freq_not_set
Dec 16 15:38:47 rac12pri systemd: Started Network Time Service.
Dec 16 15:38:48 rac12pri ntpd: 0.0.0.0 c614 04 freq_mode
[root@rac12pri ~]#
The call to “systemctl enable” replaces an invocation of chkconfig to automatically start ntpd as a service (chkconfig ntpd on). Starting the service does not produce any output, hence the need to check the status.
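The -x flag mentioned earlier goes into the OPTIONS variable in /etc/sysconfig/ntpd. A minimal version of that file might look like the sketch below (the -g flag is the distribution default; -x makes ntpd slew the clock rather than step it, which is what Oracle clusters require):

```
# /etc/sysconfig/ntpd
# -g: permit one large initial time correction at startup
# -x: slew (gradually adjust) the clock instead of stepping it
OPTIONS="-g -x"
```

After editing the file, "systemctl restart ntpd.service" picks up the new options.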
There is a slight caveat with the use of NTP: it is not the default timekeeping service. Another tool, named chronyd, is used instead.
This causes a problem after the next reboot: chronyd will be started, NTPd won’t be. The Red Hat documentation therefore has a section on how to switch:
[root@rac12pri ~]# systemctl stop chronyd
[root@rac12pri ~]# systemctl disable chronyd
[root@rac12pri ~]# systemctl status chronyd
Shared storage is provided by KVM. I am using the SSDs in my lab, from which I create a few "LUNs". These must explicitly be made "shareable" to be accessible by more than one guest. Since 12.1.0.2 Oracle installs a database for the Cluster Health Monitor by default. Currently I use the following setup for my lab 12.1.0.2 clusters:
- +CHM (external redundancy) – 1x 15GB
- +OCR (normal redundancy) – 3x 2 GB
- +DATA (external redundancy) – 1 x 15GB
- +RECO (external redundancy) – 1 x 10 GB
If you use the guided installation of Grid Infrastructure, the installer will prompt you for a single disk group only. This means that the CHM database as well as the OCR and voting files will be installed in that disk group. I prefer to separate them, though, which is why I create a second disk group, OCR, after the installation has completed and move the voting files and OCR out of +CHM.
DATA and RECO are standard Exadata disk groups and I like to keep things consistent for myself.
I use fdisk to partition the future ASM disks with 1 partition spanning the whole LUN.
A lot of the other pre-installation tasks can actually be performed during the kickstart installation. I still like to run SELinux in permissive mode, even though, according to Requirements for Installing Oracle Database 12.1 on RHEL6 or OL6 64-bit (x86-64) (Doc ID 1529864.1), SELinux can be in "enforcing" mode. The directive in the kickstart file is "selinux --permissive".
You shouldn't have to install additional packages manually: all packages to be installed should go into the %packages section of the kickstart file. Simply copy the package names from the official documentation and paste them below the last package in the section. There is one exception to the rule: cvuqdisk must be installed from the Oracle installation media.
Settings for /etc/sysctl.conf and /etc/security/limits.conf can also be made in the kickstart file as shown in the first part of this series.
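As an illustration, the kernel parameter part of the kickstart file appends values like the following to /etc/sysctl.conf. These are the minimum values from the Oracle 12c installation guide; kernel.shmmax and kernel.shmall in particular depend on the amount of RAM in the server and are therefore omitted here, so treat this as a starting point rather than gospel:

```
# minimum kernel parameters for Oracle 12c (from the installation guide)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```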
Storage to be made available to RAC must have its permissions set. Since, to my knowledge, there isn't an ASMLib for Oracle Linux 7, UDEV has to be used; my udev configuration file is also in the first part of this series.
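For reference, a udev rule for the shared disks could look like the sketch below. The device names, owner and group are assumptions based on my KVM setup (virtio disks, a "grid" owner for Grid Infrastructure) and will differ in other environments:

```
# /etc/udev/rules.d/99-oracle-asm.rules (sketch; device names are examples)
KERNEL=="vdb1", OWNER="grid", GROUP="asmdba", MODE="0660"
KERNEL=="vdc1", OWNER="grid", GROUP="asmdba", MODE="0660"
```

After changing the rules file, "udevadm control --reload-rules" followed by "udevadm trigger" applies the rules without a reboot.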
To make sure the user and group IDs for the oracle and grid accounts are the same on both nodes, I create the accounts in the kickstart file as well. Passwords are deliberately not set: they may evolve and I can't possibly remember them all :)
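In kickstart syntax, creating the accounts with fixed IDs might look like this. The numeric IDs follow the convention used by the oracle-rdbms-server preinstall package, but they are only an example; what matters is that they are identical on all cluster nodes:

```
# groups and users with fixed IDs (example values)
group --name=oinstall --gid=54321
group --name=dba      --gid=54322
group --name=asmdba   --gid=54323
user --name=oracle --uid=54321 --gid=54321 --groups=dba,asmdba
user --name=grid   --uid=54322 --gid=54321 --groups=asmdba
```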
User equivalence can be set up using a technique I have already described in an earlier blog post. Although the user equivalence setup can be deferred to the Grid Infrastructure installation, I still perform it beforehand, which allows me to run the cluster verification tool with the -fixup option.
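The key-based part of that user equivalence setup boils down to generating a passwordless key pair and distributing the public key to the other node. The sketch below uses a throwaway path for demonstration; the node and account names in the commented ssh-copy-id call are assumptions based on this article's naming scheme:

```shell
# generate a passwordless RSA key pair (example path; normally ~/.ssh/id_rsa)
rm -f /tmp/id_rsa_demo /tmp/id_rsa_demo.pub
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/id_rsa_demo

# distribute the public key to the other node (run as the account owner):
# ssh-copy-id -i /tmp/id_rsa_demo.pub oracle@rac12pri2

# verify both halves of the key pair exist
ls -l /tmp/id_rsa_demo /tmp/id_rsa_demo.pub
```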