A Beowulf-class cluster computer, commonly referred to as a Beowulf cluster, is a collection of "regular," i.e. consumer-grade, computer systems (called nodes) linked together in a network and running software that allows the computers' resources to be combined. Beowulf clusters were first introduced in 1994 as a result of the Beowulf Project at CESDIS. Thanks to the hard work of many people at various universities and organizations, several resources are available for building a Beowulf cluster, and many skilled programmers have developed practical applications for them.

The name of our original cluster, Valhalla, comes from Norse mythology. Valhalla is the great hall of Odin, where he feasts with his chosen heroes, all those who fell bravely in battle, for all who die a peaceful death are excluded. We chose this name because our cluster began as a collection of "retired" computers. They had served the University well and were chosen to become part of the beginnings of a supercomputer (a great hall of computers).
Today, Grethor consists of 16 Hewlett-Packard DL165 G5 computers, each with four dual-core processors, and 18 Hewlett-Packard DL165 G6 computers, each with four six-core processors, for a total of 560 cores. It is configured with 2 GB of RAM per core, for a total of 1.12 TB of RAM. We have 9 TB of primary disk storage served via NFS. The interconnect is currently Gigabit Ethernet, and we will soon be adding 2X InfiniBand. We use the Rocks distribution of operating and management software from the University of California, San Diego.
Grethor uses the Open MPI implementation of the Message Passing Interface (MPI).
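As a small illustration of what a program for the cluster looks like, here is a minimal MPI "hello world" in C. This is a generic sketch, not a Grethor-specific example: the file name, process count, and compile commands shown in the comments are assumptions, and it requires an MPI toolchain (such as Open MPI) to build and run.

```c
/* Minimal MPI "hello world" sketch.
 * Compile (with an MPI toolchain installed):  mpicc hello.c -o hello
 * Run, e.g., with eight processes:            mpirun -np 8 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}
```

Each process launched by mpirun runs the same program and identifies itself by its rank; real applications use the rank to divide the work among nodes.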
All HPCC facilities are for the use of UMSL faculty, students, staff, and their collaborators only. Facilities are funded by a combination of faculty contributions (which give guaranteed access to resources) and ITS funds. Please feel free to comment, ask questions, or contribute by sending e-mail to email@example.com.