Written in 2003 ... I think
I have just had the great experience of managing the purchase and installation of a fully loaded (sixteen 1 GHz Alpha CPUs, 32 GB of memory) AlphaServer GS160. We recommended this system because its combination of hard and soft partitioning seemed a perfect match for the machines it would replace.
Even though I knew the capabilities of this class of machine, I had to do my homework on configuring it before adding it to the existing cluster. But first, we had to get it on the floor and powered up.
This class of machine is heavy, so check your floor rating. Most modern raised floors use cement-filled tiles two feet square, generally rated at 6000 pounds per tile. However, allowing for the supports settling over time, I would suggest a conservative estimate of 4500 pounds per tile. Don't forget to check the ratings of the piers holding up the tiles! Most modern floors should easily take the 1000 pounds or so of computer spread over three tiles.
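The back-of-the-envelope check above can be written out explicitly. This is only a sketch using the round figures quoted here; substitute your own system weight, footprint, and the rating from your floor vendor's documentation:

```python
# Rough floor-load sanity check using the figures quoted above.
# All numbers are illustrative; use your site's actual ratings.
system_weight_lbs = 1000        # approximate weight of the cabinets
tiles_under_footprint = 3       # tiles the footprint spans
derated_tile_rating_lbs = 4500  # conservative per-tile rating after settling

load_per_tile = system_weight_lbs / tiles_under_footprint
margin = load_per_tile / derated_tile_rating_lbs

print(f"Load per tile: {load_per_tile:.0f} lbs ({margin:.0%} of rating)")
assert load_per_tile < derated_tile_rating_lbs, "floor rating exceeded!"
```

Note this treats the load as evenly spread; castors concentrate weight at four points, which is why the pier ratings matter too.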
However, don't assume: check! A million dollars' worth of computer crashing through the floor would be a major career-limiting move.
Compaq's Site Preparation Guide (pdf) is a little vague on power requirements. Nominally, the machine needs two 30 amp 3-phase circuits. However, you must consider expansion cabinets. The base system comes as a logic cab, containing the CPUs and memory, and a systems cab that contains the power supplies and master PCI cages. If you have more than two PCI cages, Compaq will probably want to sell you an expansion cab, and that cab will have its own power requirements. You can also opt for the dual 3-phase switched option, which allows you to power the system and logic cabs from switched dual feeds (i.e., four 3-phase circuits).
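To get a feel for what those two 30 amp feeds represent, here is a rough nominal-capacity estimate. The 208 V line-to-line voltage is my assumption for a North American site, not a figure from the Site Preparation Guide; check your actual service voltage:

```python
import math

# Rough nominal-capacity estimate for two 30 A 3-phase feeds.
# The 208 V line-to-line figure is an assumption for illustration only;
# consult the Site Preparation Guide and your electrician for real numbers.
line_voltage = 208   # volts, line-to-line (assumed)
circuit_amps = 30
circuits = 2

# Apparent power of a balanced 3-phase circuit: sqrt(3) * V * I
kva_per_circuit = math.sqrt(3) * line_voltage * circuit_amps / 1000
total_kva = kva_per_circuit * circuits

print(f"~{kva_per_circuit:.1f} kVA per circuit, ~{total_kva:.1f} kVA total")
```

That total is before any expansion cab, which is exactly why the requirements should be pinned down in writing up front.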
I ended up ignoring the power distribution boxes in the one expansion cab that I needed and cabling the six additional 110-volt circuits to the main power distribution boxes in the systems cab.
My recommendation is to ask the person who is configuring your system to specify in writing what the power requirements are before the system is delivered.
Compaq's Site Preparation Guide gives the heat output of a GS160 with two system boxes: a fully loaded system produces 24,000 BTU per hour. Check that your air conditioning is adequate to deal with this additional heat load.
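If, like me, you have to translate that figure for the facilities people, the standard unit conversions put it in more familiar terms (the 24,000 BTU/hr figure is from the guide; the conversion factors are standard):

```python
# Convert the quoted heat output into common cooling units.
heat_btu_per_hour = 24_000

# 1 ton of refrigeration = 12,000 BTU/hr; 1 BTU/hr ~= 0.29307 W
tons_of_cooling = heat_btu_per_hour / 12_000
heat_kw = heat_btu_per_hour * 0.29307 / 1000

print(f"~{tons_of_cooling:.1f} tons of cooling, ~{heat_kw:.1f} kW of heat")
```

So the machine is dumping roughly two tons' worth of cooling load, about 7 kW, into the room, on top of whatever else is on the floor.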
Compaq distributes ConsoleWorks as a console management system with all GS-class machines. Although this system is Windows NT-based, it is certainly a more up-to-date product than the CommandIT product from Computer Associates.
As mentioned, ConsoleWorks runs under Windows NT and comes preconfigured on a Compaq PC that you just have to plug in and boot. However, I had a slight problem when relocating the PC to its permanent home: the 5-volt side of the power supply decided to die. Before the field service engineer realized what the problem was, she had replaced the motherboard in the PC. Bad mistake.
It turns out that the ConsoleWorks license is tied to a range of motherboard serial numbers. The proper replacement part for a failed power supply is the entire PC!
ConsoleWorks is a nice solution in the pointy-clicky world. It's certainly going where we want with web accessibility, so we can manage consoles from, for example, wireless-enabled PDAs. However, I'm not sure I like depending on Windows to manage my million-dollar machine...
There are two resources to balance when answering the partitioning question for this class of machine. First, the amount of raw CPU power available suggests running everything on the one machine. Second, "galactic" memory is a marked advantage for systems like Rdb, which can take advantage of memory shared between soft partitions.
The situation I find myself dealing with is a not-quite data warehouse, not-quite transaction processing system. The vast majority of CPU-intensive jobs require read-only access to the database, while the transaction-based processes require fast reads and writes.
As this system will use Rdb for the large database, the current thinking on how to partition the machine revolves around the fast access the transaction processing load requires. The machine I have available has four master PCI boxes, giving me maximum flexibility to partition it. Currently, we have an "initial" configuration that places half the I/O resource in the partition that will perform the transaction processing, plus two other partitions: one to perform read-only online queries, and one for batch processing.
We also have what I am going to call an "experimental" configuration. With this setup, the machine will still be partitioned into three, but one of the QBBs will have two of the PCI boxes attached and will be the transaction partition. The initial configuration is more flexible when it comes to moving CPU resources from one partition to another, but the experimental configuration may pay off handsomely by putting all the available I/O bandwidth on one box (although we would pay a penalty for redistributing the CPUs in the transaction partition, since they would no longer have any local memory).
I will run the initial configuration to get a feel for how CPU resources will move between partitions under load, and to get a baseline performance snapshot. Then the second layout will be tried so we can compare the performance.
We are very pleased with the new system. There will still be a learning curve to climb when it comes to tuning this system for best performance, taking into consideration NUMA and some of the new OpenVMS 7.3 features. I can feel another article in my near future.
Meanwhile if you have any questions about this class of machine, feel free to contact me.