The new lab server has arrived

Update: the Oracle Linux memory problem has since been solved.

As part of an ongoing research project I have invested a little bit of money in a new lab server. Sadly my Phenom II X6 with 16GB RAM was not on the HCL for ESXi 5, which I required. Before I knew that ESXi can itself be virtualised, I looked at a number of options.

After much consideration and calculation I decided to buy a machine. I already rent a Core i7-2600 with 32GB of RAM from Hetzner (their EX4 is superb value for money, check it out!), but I needed something more flexible in terms of OS installation and virtualisation. Although the team at Hetzner is most helpful and skilled, installing ESXi or Oracle VM 3.1.x remotely would have been quite a task, especially hardening the setup!

For some time I thought it was no longer possible to virtualise certain Oracle setups on an ultra-mobile laptop. I managed to run a two-node 10.2 cluster with ASM on a ThinkPad T61 at the time, but when 11.2 came out, the memory requirements pushed laptops to their limit. Sure, there are laptops that can be equipped with more than 16GB RAM, but they can hardly be carried around. I needed a good compromise between battery endurance and weight, and settled on the ThinkPad X220 with a Core i5-2540M. This is a great laptop with 8GB RAM and a 320GB spinning disk. Thinking about it now I should have bought an SSD instead, but that can always be fitted later.

But I digress; back to the lab machine. The choice was between a consumer board and a server/workstation setup. The consumer boards were quickly ruled out, as they are single socket only and I wanted a bit more.

The server boards for x86-64 can be divided into AMD and Intel. With AMD rapidly losing ground because its new processor architecture could not compete with the latest Intel chips, AMD has to stay in the market by offering a price advantage. The current 6200-series Interlagos chips come with 8 to 16 cores per socket, although I should add that the “cores” in the 6200 processor are neither really cores nor threads; that is a different story, better told by Wikipedia. The alternative is the Sandy Bridge E5-2600 series, better known as the processor to die for (not literally). It is currently among the fastest you can get, and I really would have liked one. However, the price tag is quite high, and rightly so! Also, the E5-2600 Xeon is made for dual-socket workstations, and the new 4600 series for quad-socket systems had not been released at the time I made the decision to buy. Even then I would not have been able to afford one anyway.

As a regular reader of c’t, a very good German IT magazine, I noticed Delta Computers in Reinbek near Hamburg. They sell HPC kit to academic institutions but also offer hardware to everyone else. I got in touch (the website might need an overhaul) and quickly configured my system:

  • AMD Opteron 6238 Interlagos, 2 sockets @ 12 “cores” each
  • 32 GB RAM (the upgrade to 64 GB would only have been an extra 200 GBP)
  • No internal disks, I have plenty of these
  • Supermicro H8DG6 board (socket G34)

So what’s nice about it? The price to start with, but also the IPMI 2.0 interface and the onboard SAS and SATA interfaces (no SATA 6G though). The case has a disk cage that takes 8 SATA disks in caddies, which requires no access to the mainboard and no fumbling with screws.

I should also mention a little problem: noise. The board has five fans attached that I could see: three take air in from the front and channel it across the CPUs, the north bridge and the memory. Two small fans suck the air out of the case. They are small in diameter, and I had to find the BIOS setting to reduce their RPM. The familiar airplane-taking-off noise greeted me when I first powered the server up, and I thought I couldn’t possibly use it while anyone else was in the house. The case weighs 35kg and is a 4U workstation. There is plenty of room for PCIe v2 x16 cards (three, I think) and the board takes up to 256GB of memory.

The IPMIView software from Supermicro allows me to control the server via KVM-over-IP. I can even mount ISO images over the network in place of a physical DVD-ROM drive. It is possible to administer the box without ever plugging a keyboard, mouse, monitor or physical boot device into it.
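The same BMC also answers to the command-line ipmitool utility, which is handy for scripted checks of power state and fan/temperature sensors. The sketch below only echoes the commands rather than running them, since the address and credentials are placeholders (ADMIN/ADMIN is the Supermicro factory default) that you would substitute for your own:

```shell
#!/bin/bash
# Sketch: out-of-band queries against a Supermicro BMC via ipmitool.
# BMC_HOST, BMC_USER and BMC_PASS are placeholders, not real values.
BMC_HOST="192.168.1.50"   # hypothetical BMC address
BMC_USER="ADMIN"          # Supermicro factory default
BMC_PASS="ADMIN"

IPMI="ipmitool -I lanplus -H ${BMC_HOST} -U ${BMC_USER} -P ${BMC_PASS}"

# Typical queries: power state, sensor readings (fan RPM, temperatures),
# and the system event log. Echoed here so the sketch has no side effects.
echo "${IPMI} chassis power status"
echo "${IPMI} sdr list"
echo "${IPMI} sel list"
```

`chassis power on|off|cycle` from the same tool gives you remote power control as well, which pairs nicely with the KVM-over-IP console for a fully headless box.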

ESXi 5 installs beautifully and is fully Interlagos-compatible. Some guides on the Internet suggest a little tweaking of ESXi’s advanced settings to enable the deep C-states, allowing unused cores to sleep.
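As a rough sketch of what those guides describe: the CPU power policy lives under the advanced settings, reachable via esxcli from the ESXi shell. The option path /Power/CpuPolicy and its values are my reading of ESXi 5’s advanced settings and should be verified against your own build; the commands are echoed rather than executed here, since esxcli only exists on an ESXi host:

```shell
#!/bin/bash
# Sketch only: inspect and change the ESXi CPU power policy.
# /Power/CpuPolicy and the value "custom" are assumptions to verify.
LIST_CMD='esxcli system settings advanced list -o /Power/CpuPolicy'
SET_CMD='esxcli system settings advanced set -o /Power/CpuPolicy -s custom'

echo "$LIST_CMD"   # show the current CPU power policy
echo "$SET_CMD"    # switch the policy to allow deeper sleep states
```

The same setting is exposed in the vSphere client under the host’s Advanced Settings, so the ESXi shell is not strictly required.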

On the downside, I couldn’t make Oracle Linux 6.2 with the UEK kernel detect all the memory: of my 32GB, only 3GB remained visible. This seems to be a regression introduced in 2.6.28 (or so) which is supposedly fixed in Red Hat 6.2. Supermicro’s site also has a list of which operating systems are optimised for the Interlagos chips; systems not aware of the 6200 architecture risk excessive cache invalidations. ESXi 5 is aware of it, so I am sticking with it for the time being.
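For anyone wanting to reproduce the check, the quickest way to see how much RAM the kernel actually detected is /proc/meminfo, with dmesg showing whether the firmware handed all of it over in the first place. A minimal sketch:

```shell
#!/bin/bash
# Check how much RAM the kernel sees, to compare against what is
# physically installed (32GB in my case, of which only 3GB showed up).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_gb=$(( mem_kb / 1024 / 1024 ))
echo "Kernel sees ${mem_gb} GiB of RAM"

# The boot log records the BIOS/e820 memory map; if the full amount is
# missing here too, the firmware (not the kernel) is withholding it.
# May need root; guarded so a permission error doesn't abort the script.
dmesg | grep -i -m1 "Memory:" || true
```

If /proc/meminfo is short but the e820 map in dmesg shows the full amount, the kernel is the culprit, which matches the UEK regression described above.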

After almost a month I have to say I’m very happy with the server. It’s noisy, granted, but headphones and good music make it bearable. The performance is great; I have yet to see how well it copes with my 4-node RAC cluster and OEM 12.1 Cloud Control.


12 thoughts on “The new lab server has arrived”

  1. Pingback: Today's Linux Server Links | Nine OM

  2. flashdba

    Hey Martin… Now you need to attach it to a flash memory array :-)

    Thanks for the tip about the internal KVM, I have mine working now too.

  3. Imtiyaz

    Hi Martin, I have been to your presentations at UKOUG SIGs and do follow your blog. Haven’t met you yet but will do soon.
    Just interested to know what OS you installed on the new lab server, I mean the base OS.

    1. Martin Bach Post author

      Hi Imtiyaz, feel free to come over and say hello next time!

      Currently the lab server runs ESXi as a bare-metal hypervisor. I am planning to install Oracle VM, Oracle Linux and more recent kernels soon.


  4. Pingback: Kernel UEK 2 on Oracle Linux 6.2 fixed lab server memory loss « Martins Blog

  5. Peter

    I was just wondering what you use for storage? You mentioned lots of SATA ports, so is that what you’re doing? With or without a RAID controller?

    I’m about to upgrade my own lab, and am just looking for suggestions/inspiration.

    All the best

    1. Martin Bach Post author

      Storage was simple: SSD! The board has a SAS RAID controller, but SAS disks are too expensive for what I want to do. So a couple of 256GB SSDs serve the demanding virtualisation targets (RAC shared storage, for example) and the rest lives on a bunch of data stores. I’m using ESXi 5 for that.

      Hope that helps,


      1. Peter Stibbons


        I was thinking SSDs as well for pretty much all DB storage, and 4-5 SATA3 drives for OS, templates, ISOs, backups etc. I’ve got an LSI 9965-8i RAID controller that I can use for the SATA drives (I’m guessing RAID 5). Will see if I’ll put the SSDs on the card as well…. If I’m feeling really adventurous I could always try CacheCade.

        As for the hypervisor, I think I’ll stick with OVM 2.2 (if I can get the LSI card to work) as it’s good enough for what I want to do. I could possibly use 3.1, but I’d like to use LVM to handle DB storage in Dom0 (and present LVs as ‘phy’ devices to the VMs for performance reasons) and unfortunately that’s no longer an option in OVM 3. But then again, if I’m using SSDs it may not matter how the storage is presented (file or phy).

        I wouldn’t mind testing vSphere/ESXi, but from what I’ve read there’s only a 60-day trial and I don’t want to have to rebuild the server after two months. I might try to run it inside OVM just to get a feel for the product ;-)



      2. Martin Bach Post author

        Hi Peter,

        There is a “free” version of ESXi (which cannot be managed by vCenter and is a single-box-only solution, the way I read the FAQ). However you can use the vSphere client, which is a fully functional solution to manage your ESXi host.

        From a storage point of view you need to remember that ESX is an enterprise product: there is no support for software RAID; the assumption must have been that the customer already has some kind of storage backend. Using a hardware RAID controller should work, but you ought to check the HCL to ensure that your LSI card is compatible.

        I personally think that ESX is very user-friendly, easy to administer and mature. You might also benefit from hardware-assisted virtualisation (AMD-V, Intel VT) and especially nested page tables, which appear nearly as fast as a PVM in Xen. OVM 2.2 is slightly outdated IMO, and thanks for the information that LVM-backed domU storage doesn’t seem to be an option with OVM 3.1.

        I just re-read the last paragraph and it sounds as if I were paid by VMware, which I am not; I’m just enthusiastic about their product.

        Hope this helps,


  6. Peter

    Hi Martin,

    I agree that OVM 2.2 is a bit outdated, but it gives me a bit more flexibility in terms of storage management and it’s easier to customise (like running a DNS server in Dom0 etc). Totally unsupported, but since it’s a lab server it doesn’t matter…
    So I’m hoping that I can get the card to work. If not, I’ll go with OVM 3.1 (or try ESXi).


  7. Pingback: Lights-out management console on Supermicro boards « Martins Blog

Comments are closed.