The Auburn University Samuel Ginn College of Engineering computational cluster is built on the Dell M1000E blade chassis server platform. The cluster consists of four M1000E blade chassis "fat nodes," each housing sixteen M610 half-height blade servers; every blade carries two quad-core Intel Nehalem 2.80 GHz processors, 24GB of RAM, and two 160GB SATA drives. Within each chassis, the M610 blades are connected through a Mellanox Quad Data Rate (QDR) InfiniBand switch, which the ScaleMP vSMP Foundation solution stack uses to aggregate the sixteen blades into a single system image running one copy of the operating system (CentOS). The four fat nodes are interconnected over 10GbE using M6220 blade switch stacking modules for parallel clustering with OpenMPI. Each fat node also has an independent 10GbE connection to the Brocade TurboIron 24X core LAN switch, providing login access to individual fat nodes, if desired, and consistent NFS mounting of the external persistent storage. This solution stack gives each fat node 128 Nehalem cores at 2.80 GHz, 384GB of RAM, and 5.12TB of raw internal storage, for a cluster total of 512 cores at 2.80 GHz, 1.536TB of shared-memory RAM, and 20.48TB of raw internal storage. Theoretical peak performance is calculated at 5.735 teraflops, with memory bandwidth of 90 GB/s within a 128-core fat node and 35 GB/s between 128-core segments.
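As a rough check on that figure, the theoretical peak follows from the core count and clock rate, assuming the usual four double-precision floating-point operations per clock cycle for Nehalem-generation cores (an assumption, since the flops-per-cycle figure is not stated above):

\[
R_{\text{peak}} = 512\ \text{cores} \times 2.80\ \tfrac{\text{Gcycles}}{\text{s}} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle}} = 5734.4\ \text{GFLOPS} \approx 5.735\ \text{teraflops}
\]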
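Because the fat nodes are clustered with OpenMPI over the 10GbE interconnect, a job launched with mpirun can span all four nodes. The minimal sketch below simply has each MPI rank report which node it is running on; the file name, host file name, and launch command are illustrative assumptions rather than the cluster's documented configuration.

/* hello_mpi.c -- minimal OpenMPI sketch (illustrative, not site documentation):
 * each rank reports its rank number and the node it runs on, showing how a
 * single job can be spread across the four vSMP fat nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total ranks in the job */
    MPI_Get_processor_name(host, &name_len);  /* node hosting this rank */

    printf("Rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}

Compiled with "mpicc hello_mpi.c -o hello_mpi" and launched, for example, with "mpirun -np 512 --hostfile fatnodes ./hello_mpi" (where "fatnodes" is a hypothetical host file listing the four fat nodes), OpenMPI distributes the 512 ranks across the cluster while vSMP presents each 128-core fat node to them as a single shared-memory system.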
Persistent external storage is provided by a SAS-attached Dell PowerVault MD3000 array with 15TB of raw SATA capacity, exported by an R610 NFS server and mounted on each fat node over 10GbE NFS.
Battery backup for the cluster is provided by four APC Smart-UPS SURT8000RMXLT6U units (6400 Watts / 8000 VA; 208V input / 208V output; DB-9 RS-232, RJ-45 10/100 Base-T, and SmartSlot interface ports; extended-runtime model; 6U rack height). These UPS units supply single-phase 208V battery-backed power to each component in the cluster architecture.