I promised in the introduction to describe my lab environment in the first part of the series. So here we go…
Similar to the Fedora project, SuSE (now Novell) came up with a community distribution some time ago which can be freely downloaded from the Internet. These community editions give users a glimpse of the new and upcoming enterprise distribution, such as RHEL or SLES.
I have chosen the openSUSE 11.2 distribution for the host operating system. It has been updated to xen 3.4.1, kernel 2.6.31 and libvirt 0.7.2. These packages provide a stable execution environment for the virtual machines we are going to build. Alternative xen-based solutions were considered but rejected: during initial testing I found that Oracle VM 2.1.x virtual machines could not mount iSCSI targets without kernel-panicking and crashing; Citrix's XenServer is too commercial and its community edition lacks needed features; and Virtual Iron had already been purchased by Oracle.
All kernel 2.6.18-based distributions such as Red Hat 5.x and its clones were discarded because of their age and lack of features. After all, 2.6.18 was introduced three years ago, and although features have been back-ported to it, its xen support is way behind what I needed. The final argument in favour of openSUSE was the fact that SuSE provides a xen-capable 2.6.31 kernel out of the box. Although it is perfectly possible to build one's own xen kernel, this is an advanced topic and not covered here. openSUSE also makes configuring the network bridges very straightforward through good integration into yast, the distribution's setup and configuration tool.
The host system uses the following components:
- Single Intel Core i7 processor
- 24GB RAM
- 1.5 TB hard disk space in RAID 1
The whole configuration can be rented from hosting providers, which is what I have chosen to do. The host has run a four-node Oracle 11.2 cluster plus two additional virtual machines for Enterprise Manager Grid Control 11.1 without problems. In my experience the large amount of memory is the greatest benefit of this configuration; allocating 4 GB of RAM to each VM helped a lot.
You should be roughly familiar with the concepts behind Xen virtualisation; the following list explains the most important terminology.
- Hypervisor: The enabling technology for running virtual machines. The hypervisor used in this document is the Xen hypervisor.
- dom0: The dom(ain) 0 is the name for the host. The dom0 has full access to all the system's peripherals.
- domU: In Xen parlance, a domU is a virtual machine. Xen differentiates between paravirtualised and fully virtualised machines. Paravirtualisation, broadly speaking, offers superior performance but requires a modified operating system. I am going to use paravirtualised domUs.
- Bridge: A (virtual) network device used for IP communication between virtual machines and the outside world.
Start off by installing the openSUSE 11.2 distribution, choosing either the GNOME or KDE desktop. Long years of exposure to Red Hat based systems made me choose the GNOME desktop. Once the installation has completed, start the yast administration tool and click on the "Install Hypervisor and Tools" button. This will install the xen-aware kernel and add the necessary entry to the GRUB boot loader. Once completed, reboot the server and boot the xen kernel. You don't need to configure any network bridges at this stage, even though yast prompts you to do so.
Networking on the dom0
RAC requires at least two NICs per cluster node, and storage is typically attached via Fibre Channel. In our example I am going to use iSCSI targets for storage instead, provided by the OpenFiler community edition. It is good practice to separate storage traffic from any other communication, just as with the cluster interconnect; therefore, a third bridge will be used. A production setup would of course look different, but iSCSI serves the purpose quite well here. A production cluster would also feature redundancy everywhere, including NICs and HBAs. Remember that redundancy can prevent outages!
The communication between the cluster nodes will be channelled over virtual switches, so-called bridges. It used to be quite difficult to set up a network bridge for Xen, but openSUSE's yast configuration tool makes this quite simple. My host has the following bridges configured:
- br0: This is the only bridge with a physical interface attached, normally eth0 or bond0. It won't be used for the cluster and exists purely to let my ssh traffic in.
- br1: I use br1 as a host-only network for the public cluster communication. It does not have a bridged physical interface.
- br2: This is in use for the private cluster interconnect. This bridge doesn't have a physical NIC configured either.
- br3: Finally, this bridge will be used to allow iSCSI communication between the filers and the cluster nodes. It too has no physical NIC configured.
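Behind the scenes, yast persists these settings as ifcfg files under /etc/sysconfig/network. As a sketch of what a host-only bridge definition might look like (the IP address is my own example for the dom0's end of br1, and the exact keywords can vary between openSUSE releases, so treat this as an illustration rather than a template):

```
# /etc/sysconfig/network/ifcfg-br1 -- host-only bridge, no physical port
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.99.1/24'
BRIDGE='yes'
BRIDGE_PORTS=''
BRIDGE_STP='off'
BRIDGE_FORWARDDELAY='0'
```

For br0 the only difference is that BRIDGE_PORTS would name the physical device, e.g. 'eth0'.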
I said earlier that configuring a bridge used to be quite tedious, and for some other distributions it still is: it requires quite a bit of knowledge of the bridge-utils package and of the naming conventions for virtual and physical network interfaces in Xen. To configure a bridge in openSUSE, start yast and click on the "Network Settings" icon to start the network configuration.
The configuration tool will load the current network configuration. Bridge br0 should be configured to bridge the public interface name, usually eth0. All other bridges should not bridge physical devices, effectively making them host-only. If you haven’t configured a network bridge when you installed the xen hypervisor and tools, it’s time to do so now. Identify your external networking device in the list of devices shown on the “Overview” page. Take note of all settings such as IP address, netmask, gateway, MTU, routes, etc. You can get this information by selecting your external NIC and clicking on the “Edit” button.
You should see a Network Bridge entry in the list of interfaces, which probably uses DHCP. Select it and click on "Edit". Enter all the details you just copied from your actual physical NIC and ensure that the interface is listed under the "Bridged Devices" tab. Click on "Next". Confirm the warning that a device is already configured with these settings. This will effectively deconfigure the physical device and replace it with the bridge.
Adding the host-only bridges is easier. Select the "Add" option next, and on the following screen ensure you have selected "Bridge" as the device type. The configuration name will be set correctly; don't change it unless you know what you are doing. In the following Network Card Setup screen, assign a static IP address, a subnet, and optionally a hostname. I left the hostname blank for all but the public bridge br0.
Finish the configuration assistant. Before restarting the network, ensure you have an alternative means of reaching your machine, for example a console. If the network is badly configured, you might otherwise be locked out.
The network setup
The following IP addresses are used for the example cluster:
| IP Address Range | Used For |
|---|---|
| 192.168.99.50-52 | Web interfaces of the OpenFiler iSCSI "SAN" |
| 192.168.99.53-55 | Single Client Access Name (SCAN) for the cluster |
| 192.168.99.56-59 | Node public and virtual IP addresses |
| 192.168.100.56 and .58 | Private cluster interconnect |
| 192.168.101.56 and .58 | Storage subnet for the cluster nodes |
| 192.168.101.50-52 | iSCSI interfaces of the OpenFiler "SAN" |
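The interconnect and storage addresses do not have to resolve through DNS; one way to make them known on every node is a block in /etc/hosts. A sketch, assuming the -priv and -storage suffixes (my own naming convention, not mandated by anything):

```
# private cluster interconnect (br2)
192.168.100.56  edcnode1-priv
192.168.100.58  edcnode2-priv
# storage network (br3)
192.168.101.56  edcnode1-storage
192.168.101.58  edcnode2-storage
```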
Some of these addresses need to go into DNS. Edit your DNS server's zone files and add the following to the zone's forward lookup file:
```
; extended distance cluster
filer01      IN A 192.168.99.50
filer02      IN A 192.168.99.51
filer03      IN A 192.168.99.52
edc-scan     IN A 192.168.99.53
edc-scan     IN A 192.168.99.54
edc-scan     IN A 192.168.99.55
edcnode1     IN A 192.168.99.56
edcnode1-vip IN A 192.168.99.57
edcnode2     IN A 192.168.99.58
edcnode2-vip IN A 192.168.99.59
```
The reverse lookup looks as follows:
```
; extended distance cluster
50 IN PTR filer01.localdomain.
51 IN PTR filer02.localdomain.
52 IN PTR filer03.localdomain.
53 IN PTR edc-scan.localdomain.
54 IN PTR edc-scan.localdomain.
55 IN PTR edc-scan.localdomain.
56 IN PTR edcnode1.localdomain.
57 IN PTR edcnode1-vip.localdomain.
58 IN PTR edcnode2.localdomain.
59 IN PTR edcnode2-vip.localdomain.
```
The public network maps to bridge br1 on the 192.168.99/24 subnet, the private network is carried by br2 on 192.168.100/24, and storage traffic goes through br3 on the 192.168.101/24 subnet.
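Translated into a domU definition, the three networks simply become three virtual interfaces, each attached to its bridge. A sketch of the relevant part of a paravirtualised guest's configuration file (the file name, MAC addresses and memory size are made up for illustration; only the bridge names come from the setup above):

```
# excerpt from a hypothetical /etc/xen/vm/edcnode1 configuration
name   = "edcnode1"
memory = 4096
vif    = [ 'mac=00:16:3e:00:00:01,bridge=br1',   # public network
           'mac=00:16:3e:00:00:02,bridge=br2',   # private interconnect
           'mac=00:16:3e:00:00:03,bridge=br3' ]  # storage network
```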
Reload the DNS service now to make these changes active.
Use the "host" utility to check that the SCAN resolves in DNS, to be sure it all works.
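In the live environment the check is a single `host edc-scan.localdomain`; the snippet below parses such output to confirm all three SCAN addresses come back. The here-document stands in for the real command's output, so this is a sketch of the check rather than a test of your actual DNS server:

```shell
#!/bin/sh
# Sample output as `host edc-scan.localdomain` would print it;
# replace the here-document with the real command in your environment.
scan_output=$(cat <<'EOF'
edc-scan.localdomain has address 192.168.99.53
edc-scan.localdomain has address 192.168.99.54
edc-scan.localdomain has address 192.168.99.55
EOF
)
# A correctly configured SCAN resolves to exactly three A records
count=$(printf '%s\n' "$scan_output" | grep -c 'has address')
if [ "$count" -eq 3 ]; then
    echo "SCAN resolves to $count addresses: OK"
else
    echo "SCAN resolves to $count addresses, expected 3" >&2
fi
```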
That's it: you have successfully set up the dom0 for working with the virtual machines. Continue with the next part of the series, which introduces OpenFiler and how to install it as a domU with minimal effort.