The new lab server has arrived
Posted by Martin Bach on June 5, 2012
Update: the Oracle Linux memory problem is solved: https://martincarstenbach.wordpress.com/2012/06/13/kernel-uek2-on-oracle-linux-6-2-fixed-lab-server-memory-loss/
As part of an ongoing research project I have invested a little bit of money in a new lab server. Sadly my Phenom II X6 with 16GB RAM was not on the HCL for ESXi 5, which I required. Before I knew that ESXi can itself be virtualised I looked at a number of options.
After much consideration and many calculations I decided to buy a machine. I already rent a Core i7-2600 with 32GB of RAM from Hetzner (superb value for money; check out their EX4!) but needed something more flexible in terms of OS installation and virtualisation. Although the team at Hetzner is most helpful and skilled, installing ESXi or Oracle VM 3.1.x there would have been quite a task, especially hardening the setup!
For some time I thought it was no longer possible to virtualise certain Oracle setups on an ultra-mobile laptop. I managed to run a two-node 10.2 cluster with ASM on a ThinkPad T61 at the time, but when 11.2 came out, memory requirements pushed laptops to their limits. Sure, there are laptops that can be equipped with more than 16GB RAM, but they can hardly be carried around. I needed a good compromise between battery endurance and weight, and settled on the ThinkPad X220 with a Core i5 2540M. This is a great laptop with 8GB RAM and a 320GB spinning disk. Thinking about it now I should have bought an SSD instead, but that can always be fitted later.
But I digress; back to the lab machine. The choices were either a consumer board or a server/workstation setup. The consumer boards were quickly ruled out, as they are single-socket only and I wanted a bit more.
The server boards for x86-64 can be divided into AMD and Intel. With AMD losing ground rapidly, since its new processor architecture couldn't compete with the latest Intel chips, AMD has to stay in the market by offering a price advantage. The current 6200-series Interlagos chips come with 8 to 16 cores per socket, although I should add that the "cores" in the 6200 processor are neither really cores nor threads; that's a different story, better told by Wikipedia. The alternative is the Sandy Bridge E5-2600 series, better known as the processor to die for (not literally). It is currently among the fastest you can get, and I really would have liked one. However, the price tag attached to one of them is quite high, and rightly so! Also, the E5-2600 Xeon is made for dual-socket workstations; the new 4600 series for quad-socket systems had not yet been released when I made the decision to buy. And even then I wouldn't have been able to afford one anyway.
As a regular reader of c't, a very good German IT magazine, I noticed Delta Computers in Reinbek near Hamburg. They sell HPC kit to academic institutions but also offer hardware to everyone else. I got in touch (the website might need an overhaul) and quickly configured my system:
- AMD 6238 Interlagos with 2 sockets @ 12 “cores” each
- 32 GB RAM (the upgrade to 64 GB would only have been an extra 200 GBP)
- No internal disks, I have plenty of these
- Supermicro H8DG6 board (socket G34)
So what's nice about it? The price, to start with, but also the IPMI 2.0 interface and the onboard SAS and SATA interfaces (no SATA 6G though). The case has a disk cage into which I can fit 8 SATA disks in caddies, requiring no access to the mainboard and no fumbling with screws.
I should also mention a little problem: noise. The board has five fans attached that I could see: three take air in from the front and channel it across the CPUs, the north bridge and the memory. Two small fans exhaust the air from the case. They are small in diameter, and I had to find the BIOS setting to reduce their RPM. The familiar airplane-taking-off noise greeted me when I first powered the server up, and I thought I couldn't possibly use it while anyone else was in the house. The case weighs 35kg and is a 4U workstation. There is plenty of room for PCIe v2 x16 cards (3 I think), and the board takes up to 256GB of memory.
The IPMIView software from Supermicro allows me to control the server via KVM-over-IP. I can even mount ISO images over the network in place of a DVD-ROM drive. It is possible to administer the box without ever plugging a keyboard, mouse, monitor or physical boot device into it.
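The same sort of remote control is also scriptable from any Linux box using ipmitool against the board's BMC. A minimal sketch, assuming a hypothetical BMC address and user (substitute your own); the wrapper below only prints the commands it would run, since no BMC is reachable in this example:

```shell
#!/bin/sh
# Hedged sketch: drive a Supermicro BMC with ipmitool over the LAN interface.
# BMC_HOST and BMC_USER are placeholders, not my real setup.
BMC_HOST=10.0.0.20
BMC_USER=ADMIN

ipmi() {
    # Dry run: print the ipmitool command line instead of executing it.
    echo ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" "$@"
}

ipmi chassis power status   # is the box powered on?
ipmi sdr type Fan           # read the fan sensors (the noisy ones!)
ipmi chassis power cycle    # remote reset without touching the case
```

Drop the echo from the wrapper (and add `-P` with the password) to run the commands for real.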
ESXi 5 installs beautifully and is fully Interlagos-compatible. Some guides on the Internet suggest a little tweaking of ESXi's advanced settings to enable the deep C-states, allowing idle cores to sleep.
On the downside, I couldn't make Oracle Linux 6.2 with the UEK kernel detect all the memory. In fact, of my 32GB only 3 remained visible. This seems to be a regression introduced around 2.6.28 which is supposedly fixed in Red Hat 6.2. Supermicro's site also has a list of which operating systems are optimised for the Interlagos chips; systems unaware of the 6200 architecture risk excessive cache invalidations. ESXi 5 is aware of it, so I am sticking with it for the time being.
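A quick way to check for this kind of memory loss is to compare what the booted kernel reports against what is physically installed. A small sketch that works on any Linux guest:

```shell
#!/bin/sh
# Show which kernel is running and how much memory it detected --
# handy when comparing the UEK kernel against the stock Red Hat one.
uname -r
grep MemTotal /proc/meminfo
awk '/MemTotal/ {printf "%.1f GB detected\n", $2 / 1024 / 1024}' /proc/meminfo
```

If the detected figure is far below the installed RAM, booting the other kernel from the GRUB menu narrows the problem down to the kernel rather than the hardware.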
After almost a month I have to say I'm very happy with the server. It's noisy, OK, but headphones and good music make it bearable. The performance is great; I have yet to see how well it copes with my four-node RAC cluster and OEM 12.1 Cloud Control.