Finally I have some more time to work on the next article in this series, dealing with the setup of my two cluster nodes. This is actually going to be quite short compared to the other articles so far, mainly because I have streamlined the deployment of new Oracle-capable machines to the point where I can comfortably set up a cluster in two hours. It’s a bit more work initially, but it has paid off. The setup of my reference VM is documented on this blog as well; search for virtualisation and opensuse to get to the article.
When I first started working in my lab environment I created a virtual machine called “rhel55ref”. In reality it’s OEL, because of Red Hat’s windooze-like policy of requiring an activation code. I would have considered CentOS as well, but when I created the reference VM the community hadn’t yet provided “update 5”. I like shiny new things best :)
Seems like I’m lucky now as well: with the introduction of Oracle’s own Linux kernel I am ready for the future. Hopefully Red Hat will get their act together soon and release version 6 of their distribution. As much as I like Oracle, I don’t want them to dominate the OS market too much, with Solaris now in their hands as well…
Anyway, to get started with my first node I cloned my template. After moving to /var/lib/xen/images, all I had to do was “cp -a rhel55ref edcnode1”. Repeating this for edcnode2 gave me my second node. Xen (or libvirt, for that matter) stores the VM configuration in xenstore, a backend database which can be interrogated easily. So I dumped the XML configuration of my rhel55ref VM and stored a copy in edcnode1.xml and edcnode2.xml; the command to dump the information is “virsh dumpxml domainName > file.xml”.
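Put together, the cloning and dumping step amounts to something like this (a sketch of what is described above):

cd /var/lib/xen/images
cp -a rhel55ref edcnode1
cp -a rhel55ref edcnode2
virsh dumpxml rhel55ref > edcnode1.xml
virsh dumpxml rhel55ref > edcnode2.xml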
The domU folder contains the virtual disk for the root file system of my VM, called disk0. I then created a new “hard disk”, called disk1, to contain the Oracle binaries. Experience told me not to make that too small; 20G should be enough for my /u01 mountpoint holding the Grid Infrastructure and RDBMS binaries.
[root@dom0 /var/lib/xen/images/edcnode1]# dd if=/dev/zero of=disk1 bs=1 count=0 seek=20G
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.3869e-05 s, 0.0 kB/s
I like to speed up the file creation by using the sparse file trick: the file disk1 will be reported as 20G in size, but it will only actually consume that space if the virtual machine needs it. It’s a bit like Oracle creating a temporary tablespace.
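If you want to convince yourself the file really is sparse, comparing the apparent and allocated size shows the difference (a quick check, not from the original post):

ls -lh disk1    # reports the full 20G apparent size
du -h disk1     # reports the blocks actually allocated, next to nothing at this point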
With that information it’s time to modify the dumped XML file. Again it’s important to define MAC addresses for the network interfaces, otherwise the system will try to use DHCP for your NICs, destroying the carefully crafted /etc/sysconfig/network-scripts/ifcfg-eth{0,1,2} files. Oh, and remember that the first three octets are the Xen prefix, so don’t change “00:16:3e”! Your UUID also has to be unique. In the end my first VM’s XML description looked like this:
<domain type='xen'>
  <name>edcnode1</name>
  <uuid>46a36f98-4e52-45a5-2579-80811b38a3ab</uuid>
  <memory>4194304</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>2</vcpu>
  <bootloader>/usr/bin/pygrub</bootloader>
  <bootloader_args>-q</bootloader_args>
  <os>
    <type>linux</type>
    <cmdline> </cmdline>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/var/lib/xen/images/edcnode1/disk0'/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/var/lib/xen/images/edcnode1/disk1'/>
      <target dev='xvdb' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:3e:ab:cd:ef'/>
      <source bridge='br1'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
      <target dev='vif-1.0'/>
    </interface>
    <interface type='bridge'>
      <mac address='00:16:3e:10:13:1a'/>
      <source bridge='br2'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
    </interface>
    <interface type='bridge'>
      <mac address='00:16:3e:11:12:ef'/>
      <source bridge='br3'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
      <target dev='vif-1.0'/>
    </interface>
    <console type='pty'>
      <target port='0'/>
    </console>
    <input type='mouse' bus='xen'/>
  </devices>
</domain>
You can see that the interfaces refer to br1, br2, and br3, the bridges previously defined in the first article. The “<target>” element inside each <interface> doesn’t matter, as the device name is assigned dynamically anyway.
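By the way, if you need fresh, unique values for the UUID and the MAC addresses, something along these lines will do (a sketch, not from the original post; note the 00:16:3e Xen prefix stays fixed):

uuidgen                                    # a new, unique UUID for the <uuid> element
printf '00:16:3e:%02x:%02x:%02x\n' \
    $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))   # a random MAC in the Xen prefix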
When done, you can define the new VM and start it:
[root@dom0]# virsh define edcnode1.xml
[root@dom0]# virsh define edcnode2.xml
[root@dom0]# xm start edcnode1 -c
You are directly connected to the VM’s console (80×24, just like in the old days!) and have to wait a looooong time for the DHCP requests for eth0, eth1 and eth2 to time out. This is the first thing to address. Log in as root and navigate straight to /etc/sysconfig/network-scripts to change ifcfg-eth{0,1,2}. Alternatively, use system-config-network-tui to change the network settings.
The following settings should be used for edcnode1:
- eth0: 192.168.99.56/24
- eth1: 192.168.100.56/24
- eth2: 192.168.101.56/24
These are the settings for edcnode2:
- eth0: 192.168.99.58/24
- eth1: 192.168.100.58/24
- eth2: 192.168.101.58/24
The nameserver for both is my dom0, in this case 192.168.99.10. Enter the appropriate hostname as well as the nameserver. Note that 192.168.99.57 and .59 are reserved for the node VIPs, hence the “gap”. Then edit /etc/hosts to enter the information about the private interconnect, which for obvious reasons is not included in DNS. If you like, persist your public and VIP information in /etc/hosts as well. Don’t do this with the SCAN though: it is not recommended to have the SCAN resolve through /etc/hosts, even if it works.
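For illustration, the /etc/hosts entries could look like this; the -priv and -vip hostnames are my assumption, adjust them to your own naming scheme:

# private interconnect, deliberately not in DNS
192.168.100.56   edcnode1-priv
192.168.100.58   edcnode2-priv
# optional: public and VIP addresses (the SCAN stays in DNS only)
192.168.99.56    edcnode1
192.168.99.58    edcnode2
192.168.99.57    edcnode1-vip
192.168.99.59    edcnode2-vip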
Now’s the big moment: restart the network service and get out of the uncomfortable 80×24 character limitation:
[root@edcnode1]# service network restart
For the sake of completeness, here is the complete network configuration of edcnode1:
[root@edcnode1 ~]# cat /etc/resolv.conf
nameserver 192.168.99.10
search localdomain
[root@edcnode1 ~]#
[root@edcnode1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth{0,1,2}
# Xen Virtual Ethernet
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:3e:ab:cd:ef
NETMASK=255.255.255.0
IPADDR=192.168.99.56
TYPE=Ethernet
# Xen Virtual Ethernet
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:3e:10:13:1a
NETMASK=255.255.255.0
IPADDR=192.168.100.56
TYPE=Ethernet
# Xen Virtual Ethernet
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:16:3e:11:12:ef
NETMASK=255.255.255.0
IPADDR=192.168.101.56
TYPE=Ethernet
[root@edcnode1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=edcnode1
Next on the agenda is the iSCSI initiator. It isn’t part of my standard build and had to be added. All my software is exported from the dom0 via NFS and mounted under /mnt:
[root@edcnode1 ~]# find /mnt -iname "iscsi*"
/mnt/oracleEnterpriseLinux/source/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
[root@edcnode1 ~]# cd /mnt/oracleEnterpriseLinux/source/
[root@edcnode1 ~]# rpm -ihv iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm: ...
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
It’s important to edit the initiator name, i.e. the name the initiator reports back to OpenFiler. I changed it to include edcnode1 and edcnode2 on their respective hosts. The file to edit is /etc/iscsi/initiatorname.iscsi.
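For illustration, on the first node the file could contain a single line along these lines (the exact IQN is up to you; the iqn.1994-05.com.redhat prefix is just the distribution default and an assumption on my part):

InitiatorName=iqn.1994-05.com.redhat:edcnode1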
Time to get serious now:
[root@edcnode1 ~]# /etc/init.d/iscsi start
iscsid is stopped
Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: iscsiadm: No records found!
[ OK ]
We are ready to roll. First, we need to discover the targets from the OpenFiler appliance; start with the first one, filer01:
[root@edcnode1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.101.50
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm01Filer01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:ocrvoteFiler01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm02Filer01
A restart of the iscsi service will automatically log in to the targets and persist the settings (this is very wide output; it works best in 1280xsomething resolution):
[root@edcnode1 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists [ OK ]
Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]: successful
[ OK ]
Fine! Now over to fdisk the new devices. I know that my “local” storage is named /dev/xvd*, so anything new (“/dev/sd*”) will be iSCSI-provided storage. If you are unsure, you can always check /var/log/messages to see which devices have just been discovered. You should see something similar to this output:
Sep 24 12:20:08 edcnode1 kernel: Loading iSCSI transport class v2.0-871.
Sep 24 12:20:08 edcnode1 kernel: cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (cxgb3i)
Sep 24 12:20:08 edcnode1 kernel: Broadcom NetXtreme II CNIC Driver cnic v2.1.0 (Oct 10, 2009)
Sep 24 12:20:08 edcnode1 kernel: Broadcom NetXtreme II iSCSI Driver bnx2i v2.1.0 (Dec 06, 2009)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (bnx2i)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (tcp)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (iser)
Sep 24 12:20:08 edcnode1 kernel: iscsi: registered transport (be2iscsi)
Sep 24 12:20:08 edcnode1 iscsid: iSCSI logger with pid=20558 started!
Sep 24 12:20:08 edcnode1 kernel: scsi0 : iSCSI Initiator over TCP/IP
Sep 24 12:20:08 edcnode1 kernel: scsi1 : iSCSI Initiator over TCP/IP
Sep 24 12:20:08 edcnode1 kernel: scsi2 : iSCSI Initiator over TCP/IP
Sep 24 12:20:09 edcnode1 kernel: Vendor: OPNFILER Model: VIRTUAL-DISK Rev: 0
Sep 24 12:20:09 edcnode1 kernel: Type: Direct-Access ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel: Vendor: OPNFILER Model: VIRTUAL-DISK Rev: 0
Sep 24 12:20:09 edcnode1 kernel: Type: Direct-Access ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel: Vendor: OPNFILER Model: VIRTUAL-DISK Rev: 0
Sep 24 12:20:09 edcnode1 kernel: Type: Direct-Access ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel: Vendor: OPNFILER Model: VIRTUAL-DISK Rev: 0
Sep 24 12:20:09 edcnode1 kernel: Type: Direct-Access ANSI SCSI revision: 04
Sep 24 12:20:09 edcnode1 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 0
Sep 24 12:20:09 edcnode1 kernel: scsi 1:0:0:0: Attached scsi generic sg1 type 0
Sep 24 12:20:09 edcnode1 kernel: scsi 2:0:0:0: Attached scsi generic sg2 type 0
Sep 24 12:20:09 edcnode1 kernel: scsi 1:0:0:1: Attached scsi generic sg3 type 0
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: 20971520 512-byte hdwr sectors (10737 MB)
Sep 24 12:20:09 edcnode1 kernel: sda: Write Protect is off
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: drive cache: write through
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: 20971520 512-byte hdwr sectors (10737 MB)
Sep 24 12:20:09 edcnode1 kernel: sda: Write Protect is off
Sep 24 12:20:09 edcnode1 kernel: SCSI device sda: drive cache: write through
Sep 24 12:20:09 edcnode1 kernel: sda: unknown partition table
Sep 24 12:20:09 edcnode1 kernel: sd 0:0:0:0: Attached scsi disk sda
The output will continue with /dev/sdb and other devices exported by the filer.
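If grepping through /var/log/messages feels tedious, the same information can be had from the initiator and the kernel directly (a quick check, not part of the original flow):

iscsiadm -m session                            # lists the active iSCSI sessions per target
fdisk -l 2>/dev/null | grep '^Disk /dev/sd'    # shows only the iSCSI-provided block devices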
Prepare the local Oracle Installation
Using fdisk, modify /dev/xvdb: create a partition spanning the whole disk and set its type to “8e” (Linux LVM). It’s always a good idea to install the Oracle binaries into LVM, as it makes extending the file system later much easier. I’ll include the fdisk output for this device, but won’t for later partitioning exercises.
[root@edcnode1 ~]# fdisk /dev/xvdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Once /dev/xvdb1 is ready, we can start its transformation into a logical volume. First, create a physical volume:
[root@edcnode1 ~]# pvcreate /dev/xvdb1
  Physical volume "/dev/xvdb1" successfully created
The physical volume (“PV”) is then used to form a volume group (“VG”). In real life, you’d probably have more than 1 PV to form a VG… I named my volume group “oracle_vg”. The existing volume group is called “root_vg” by the way.
[root@edcnode1 ~]# vgcreate oracle_vg /dev/xvdb1
  Volume group "oracle_vg" successfully created
Wonderful! I never quite remember how many extents this VG has, so I query it. Using “--size 10g” would throw an error, because some internal overhead reduces the available capacity to something just shy of 10G:
[root@edcnode1 ~]# vgdisplay oracle_vg
  --- Volume group ---
  VG Name               oracle_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.00 GB
  PE Size               4.00 MB
  Total PE              2559
  Alloc PE / Size       0 / 0
  Free PE / Size        2559 / 10.00 GB
  VG UUID               QgHgnY-Kqsl-noAR-VLgP-UXcm-WADN-VdiwO7
Right, so now let’s create a logical volume (“LV”) with 2559 extents:
[root@edcnode1 ~]# lvcreate --extents 2559 --name grid_lv oracle_vg
  Logical volume "grid_lv" created
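As an aside, lvcreate also accepts a percentage for the extents, which saves the trip to vgdisplay (same result, assuming the VG is otherwise empty):

lvcreate --extents 100%FREE --name grid_lv oracle_vg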
And now we need a file system:
[root@edcnode1 ~]# mkfs.ext3 /dev/oracle_vg/grid_lv
You are done! Create the mountpoint for your Oracle installation, /u01 in my case, and grant oracle:oinstall ownership of it. In this lab exercise I didn’t create a separate owner for the Grid Infrastructure, to avoid potentially undiscovered problems with 11.2.0.2 and stretched RAC. Finally, add the file system to /etc/fstab to make it persistent:
[root@edcnode1 ~]# echo "/dev/oracle_vg/grid_lv /u01 ext3 defaults 0 0" >> /etc/fstab
[root@edcnode1 ~]# mount /u01
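For the record, the mountpoint and its ownership mentioned above boil down to these two commands:

mkdir -p /u01                 # needs to exist before the mount shown above
chown oracle:oinstall /u01    # run after mounting, so it sticks on the new file system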
Now continue to partition the iSCSI volumes, but don’t create file systems on top of them. You should not assign a partition type other than the default “Linux” to them either.
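If you have to do this for a handful of LUNs, a loop like the following can save some typing; it feeds fdisk its answers on stdin (a sketch only; the device names are examples, so double-check them against /var/log/messages before running anything like this):

# n = new partition, p = primary, 1 = partition number,
# two empty lines accept the default first/last cylinder, w = write
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo -e "n\np\n1\n\n\nw" | fdisk "$d"
done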
ASMLib
Yes, I know… the age-old argument, but I decided to use it anyway. The reason is simple: scsi_id doesn’t return a value in para-virtualised Linux, which makes it impossible to set up device name persistence with udev. And ASMLib is easier to use anyway! But if your system administrators are database-agnostic and not willing to learn the basics about ASM, then rolling out ASMLib is probably not a good idea. It’s only a matter of time until someone runs an “rpm -Uhv kernel*” on your box and of course a) doesn’t tell the DBAs and b) doesn’t bother installing the matching ASMLib kernel module. But I digress.
Before you are able to use ASMLib you have to configure it on each cluster node. A sample session could look like this:
[root@edcnode1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [ OK ]
[root@edcnode1 ~]#
Now, with this done, it is possible to create the ASMLib-maintained ASM disks. For the LUNs presented by filer01 these are:
- ASM01FILER01
- ASM02FILER01
- OCR01FILER01
- OCR02FILER01
The disks are created using the /etc/init.d/oracleasm createdisk command as in these examples:
[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk asm01filer01 /dev/sda1
Marking disk "asm01filer01" as an ASM disk: [ OK ]
[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk asm02filer01 /dev/sdc1
Marking disk "asm02filer01" as an ASM disk: [ OK ]
[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk ocr01filer01 /dev/sdb1
Marking disk "ocr01filer01" as an ASM disk: [ OK ]
[root@edcnode1 ~]# /etc/init.d/oracleasm createdisk ocr02filer01 /dev/sdd1
Marking disk "ocr02filer01" as an ASM disk: [ OK ]
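If you ever need to confirm which block device is behind a label, oracleasm can tell you (a quick sanity check, not part of the original flow):

/etc/init.d/oracleasm querydisk /dev/sda1   # reports whether the partition is marked as an ASM disk
/etc/init.d/oracleasm listdisks             # lists all labels ASMLib currently knows about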
Switch over to the second node now to validate the configuration and to continue with the iSCSI LUNs from filer02. Define the domU with a configuration file similar to the one shown above for edcnode1, and start it. Once the wait for the DHCP timeouts is over and you are presented with a login prompt, set up the network as shown above. Install the iSCSI initiator package, change the initiator name and discover the targets from filer02 in addition to those from filer01.
[root@edcnode2 ~]# iscsiadm -t st -p 192.168.101.51 -m discovery
192.168.101.51:3260,1 iqn.2006-01.com.openfiler:asm02Filer02
192.168.101.51:3260,1 iqn.2006-01.com.openfiler:ocrvoteFiler02
192.168.101.51:3260,1 iqn.2006-01.com.openfiler:asm01Filer02
[root@edcnode2 ~]# iscsiadm -t st -p 192.168.101.50 -m discovery
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm01Filer01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:ocrvoteFiler01
192.168.101.50:3260,1 iqn.2006-01.com.openfiler:asm02Filer01
Still on the second node, continue by logging in to the iSCSI targets:
[root@edcnode2 ~]# service iscsi start
iscsid (pid 2802) is running...
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer02, portal: 192.168.101.51,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler02, portal: 192.168.101.51,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer02, portal: 192.168.101.51,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer02, portal: 192.168.101.51,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler02, portal: 192.168.101.51,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm01Filer02, portal: 192.168.101.51,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:ocrvoteFiler01, portal: 192.168.101.50,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:asm02Filer01, portal: 192.168.101.50,3260]: successful
Partition the disks from filer02 the same way as shown in the previous example. On edcnode2, fdisk reported the following as new disks:
Disk /dev/sda doesn't contain a valid partition table
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sda: 10.6 GB, 10670309376 bytes
Disk /dev/sdb: 2650 MB, 2650800128 bytes
Disk /dev/sdf: 10.7 GB, 10737418240 bytes
Note that /dev/sda and /dev/sdf are the two 10G LUNs for ASM data, and /dev/sdb is the OCR/voting disk combination. Next, create the additional ASMLib disks:
[root@edcnode2 ~]# /etc/init.d/oracleasm scandisks
...
[root@edcnode2 ~]# /etc/init.d/oracleasm createdisk asm01filer02 /dev/sda1
Marking disk "asm01filer02" as an ASM disk: [ OK ]
[root@edcnode2 ~]# /etc/init.d/oracleasm createdisk asm02filer02 /dev/sdf1
Marking disk "asm02filer02" as an ASM disk: [ OK ]
[root@edcnode2 ~]# /etc/init.d/oracleasm createdisk ocr01filer02 /dev/sdb1
Marking disk "ocr01filer02" as an ASM disk: [ OK ]
[root@edcnode2 ~]# /etc/init.d/oracleasm listdisks
ASM01FILER01
ASM01FILER02
ASM02FILER01
ASM02FILER02
OCR01FILER01
OCR01FILER02
OCR02FILER01
Perform another scandisks on edcnode1 so it too sees all the disks:
[root@edcnode1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@edcnode1 ~]# /etc/init.d/oracleasm listdisks
ASM01FILER01
ASM01FILER02
ASM02FILER01
ASM02FILER02
OCR01FILER01
OCR01FILER02
OCR02FILER01
Summary
All done! And I seriously thought this was going to be a shorter post than the others; how wrong I was. Congratulations on having made it to the bottom of the article, by the way.
In the course of this post I prepared my virtual machines for the installation of Grid Infrastructure. The ASM disk names will be persistent across reboots thanks to ASMLib, with no messing around with udev. You might notice that there are two disks from filer01 but only one from filer02 for the OCR/voting disk group, and that’s for a reason. I’m cheeky and won’t tell you here; that’s for another post later…