Martins Blog

Trying to explain complex things in simple terms

Getting started with iSCSI part II: Presenting Storage

Posted by Martin Bach on October 21, 2009

This part of the series focuses on how to add storage to your appliance and then make it available to your clients. I have focused on getting you to quick success; this setup is meant for a lab to experiment with, as it is not secure enough for a production rollout. This part of the article series is part of a presentation I gave at UKOUG's UNIX SIG in September.

Add sharable storage

So first of all, you need to present more storage to your Openfiler system. In my case that's quite simple: I am running Openfiler as a domU under OpenSuSE 11.1, but any recent Xen implementation should do the trick. I couldn't convince Oracle VM 2.1.5 to log in to an iSCSI target without crashing the whole domU (RHEL 5.2 32bit), which prompted me to ditch that product. I hear that Oracle VM 2.2 focuses on the management interface as a consequence of the Virtual Iron acquisition earlier this year. Rumour has it that the underlying Xen version won't change (much). That remains to be seen; Oracle VM 2.2 hasn't been publicly released yet. But back to Openfiler and the addition of storage.

First of all I create a logical volume (LV) on the dom0:

# lvcreate --name openfiler_data_001 --size 15G data_vg

Next I add that to the appliance:

# xm block-attach 5 'phy:/dev/data_vg/openfiler_data_001' xvdd w

This command adds the new LV to domain 5 (the Openfiler domU) as /dev/xvdd in write mode. Openfiler (and in fact any modern Linux-based domU) will detect the new storage straight away and make it available for use.
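Before heading to the GUI it is worth checking that the domU actually saw the disk. On a live system you would simply inspect /proc/partitions inside the Openfiler domU; the snippet below runs the same check against a captured sample (the sample content is made up for illustration, but 15728640 1K-blocks is exactly the 15G we just attached):

```shell
# On the real domU you would run:  grep xvdd /proc/partitions
# Here the same awk check runs against a sample snapshot of /proc/partitions.
sample='major minor  #blocks  name
 202        0   8388608 xvda
 202       48  15728640 xvdd'
# print device name and size in 1K blocks if the new disk is visible
echo "$sample" | awk '$4 == "xvdd" { print $4, $3 }'
```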

Administer storage through the GUI

For the next sections I'll ask you to navigate around in the GUI. One thing to remember here is that most navigation starts with the horizontal tab bar at the top of the screen (status, system, volumes, …), after which additional options appear on the right-hand side of the screen.

Switch to the GUI now and click on Volumes, then Block Devices. You are presented with the list of block devices known to the appliance. Click on /dev/xvdd, which is the new block device. Create one partition spanning the whole disk as a physical volume (not software RAID).

Next, click on Volume Groups and create a new volume group using /dev/xvdd1; call it rac_vg. So far so good. Next up we partition the volume group into LVs. Click on Add Volume, select rac_vg and click Change to refresh the display. You will now need to create 6 logical volumes: 5 x 500M volumes for the OCR and voting disks plus their mirrors. The last volume is for the ASM disk group; if you have the space, you could also create a second disk group for the FRA. The volume names should be as described below if you'd like to follow the tutorial:

  • ocr_a
  • ocr_b
  • vote_a
  • vote_b
  • vote_c
  • asm_001

For each of them, ensure that the filesystem type is iscsi, not xfs.
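If you prefer the command line, the GUI steps above correspond roughly to a series of lvcreate calls on the appliance. The sketch below only prints the commands so you can review them first (names and sizes follow the tutorial; note the GUI additionally tags the volumes with the iscsi volume type, which plain lvcreate does not do):

```shell
# Print (don't run) the lvcreate commands matching the tutorial's layout.
# Pipe the output to sh on the appliance to actually create the volumes.
gen_lvcreate() {
  for lv in ocr_a ocr_b vote_a vote_b vote_c; do
    echo "lvcreate --name $lv --size 500M rac_vg"
  done
  # the ASM volume takes whatever space remains in the volume group
  echo "lvcreate --name asm_001 -l 100%FREE rac_vg"
}
gen_lvcreate
```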

For each of the volumes we just created, we need to create iSCSI targets, which is not really exciting but needs to be done nevertheless. So click on iSCSI Targets next (we _are_ finally getting closer!) and add 6 new iSCSI targets. I created the following:

  • iqn.2009-10.com.openfiler:rac_vg.ocr_a
  • iqn.2009-10.com.openfiler:rac_vg.ocr_b
  • iqn.2009-10.com.openfiler:rac_vg.vote_a
  • iqn.2009-10.com.openfiler:rac_vg.vote_b
  • iqn.2009-10.com.openfiler:rac_vg.vote_c
  • iqn.2009-10.com.openfiler:rac_vg.asm_001

The names can be arbitrary, but I prefer the vg_name.lv_name syntax.

Now another administrative step is needed: adding the RAC nodes to the ACL maintained by the appliance, which allows the hosts to discover the storage. To accomplish this, change to the System tab and scroll down to the network access configuration. Add the hosts the storage is going to be presented to, preferably by their numeric IP addresses. In my case that's 192.168.1.90/255.255.255.255 (yes, FF.FF.FF.FF) and 192.168.1.91/255.255.255.255. Set the type to share.
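The /255.255.255.255 netmask looks odd but is simply a /32, i.e. it matches exactly one host. Under the covers, the IET-based target Openfiler ships typically enforces per-target ACLs through an initiators.allow-style file once you grant access in the per-target Network ACL step; an entry looks roughly like the following (file location and exact syntax are assumptions and vary between Openfiler releases):

```
# one line per target: <target IQN> <comma-separated allowed initiators>
iqn.2009-10.com.openfiler:rac_vg.ocr_a 192.168.1.90, 192.168.1.91
```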

Once all these targets are created, we need to map the LUNs (i.e. the logical volumes created previously) to the targets. Select the first target, iqn.2009-10.com.openfiler:rac_vg.ocr_a, from the list of targets (a click on the Change button activates it) and then choose LUN Mapping. You are now presented with a list of logical volumes that can be mapped to the target. For our ocr_a target we choose the ocr_a LV, surprise, surprise. I recommend write-through for the R/W mode and blockio for the transfer mode here. Once the LUN is mapped, click on Network ACL. You need to grant access explicitly for each of the RAC nodes. A click on Update makes the changes permanent.

Now repeat this for all 6 targets :) , carefully mapping each target to its respective LUN. No target should have more than 1 LUN mapped.
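For reference, what the GUI builds here ends up as ietd.conf-style target definitions for the iSCSI Enterprise Target underneath Openfiler. Below is a sketch of two of the six stanzas (file path and option names are assumptions and may differ between versions); note that each target carries exactly one LUN:

```
Target iqn.2009-10.com.openfiler:rac_vg.ocr_a
        Lun 0 Path=/dev/rac_vg/ocr_a,Type=blockio
Target iqn.2009-10.com.openfiler:rac_vg.asm_001
        Lun 0 Path=/dev/rac_vg/asm_001,Type=blockio
```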

This probably took 5-10 minutes, and it wasn't the thrill of a lifetime, I have to admit. By the way, all the steps just undertaken can easily be carried out from the command line, but as I said in the introduction to part I, you need to know a lot more about what you are doing. Which isn't a bad thing, actually; the GUI merely gives you a fast-tracked path to success. I can only encourage everybody to read up and get some background information on these topics.

Summary

We successfully added storage to the appliance, turned it into a volume group and subdivided that into multiple logical volumes. Each of these volumes has been transformed into an iSCSI target, including access permissions for the RAC nodes. What remains to be done is the presentation of the storage to the cluster, which I'll cover in part 3 of this series.
