
Operations concept of the central computing servers

Access

Information about usernames, logging in, etc. can be found on the access page.

Your username and password are administered via LDAP and are therefore identical on all compute servers at the RRZN.

We also offer consulting on choosing an appropriate system for your computations and on using particular software.  Our consultants can help with the preparation and implementation of projects on the RRZN compute servers.  If needed, please contact our consulting team.

The following introduction to the cluster system gives an overview of the computing services at the RRZN.

  • Introduction to the cluster system

Login servers

Direct logins to the system are only possible via the login servers Zen and Orac.

Zen is accessible from outside the LUH network.  From Zen one can then log into the login server for the compute services (Orac).  Basic software packages such as email clients, web browsers, text editors, etc. are installed on Zen for your use.

Orac is only accessible from within the LUH network.  The server has a quad-core processor and 8 GB of RAM.  Orac provides access to much the same software as is provided on the batch servers.  This computer is intended mainly for logging in, small interactive calculations and interactive pre- or postprocessing of simulation data.
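
For illustration, logging in from outside the LUH network might look as follows (a minimal sketch; the full host names used here are assumptions, the actual addresses are given on the access page):

    # Step 1: log into Zen, which is reachable from outside the LUH network
    ssh username@zen.rrzn.uni-hannover.de

    # Step 2: from Zen, continue on to Orac inside the LUH network
    ssh username@orac.rrzn.uni-hannover.de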

From Orac it is possible to submit batch jobs to the batch servers (also known as compute servers).

Batch system

The so-called batch servers Pozzo, Estragon, Centaurus, TC, Paris and CLUH are available for use.

These computers are not directly accessible.  Instead, the typical way to run simulations on them is as follows: one creates a text file, for example on Orac, containing the commands one wishes to run (a so-called batch script, or job script).  This script is then passed to a software system (the batch system) which manages the available computing resources and runs the job script at an appropriate time, according to which resources are currently free.
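
As a minimal sketch, such a job script might look as follows.  The PBS-style directives and the qsub command are assumptions here; the batch system actually in use, and its exact syntax, are described under Batch system.

    #!/bin/bash
    #PBS -N my_simulation         # a name for the job
    #PBS -l nodes=1:ppn=4         # request 1 node with 4 processor cores
    #PBS -l walltime=02:00:00     # maximum run time of 2 hours

    cd $PBS_O_WORKDIR             # change to the directory the job was submitted from
    ./my_simulation input.dat     # the command(s) one wishes to run

One would then pass the script to the batch system, for example with qsub jobscript.sh, and the batch system starts it once the requested resources become free.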

Further information can be found under Batch system.

Batch servers

The batch servers have different characteristics and usage requirements.  All, however, are parallel computers and as such contain several processor cores.

Pozzo, Estragon and Centaurus are computers which have a large shared memory.  Pozzo and Estragon are "twins": each has 4 quad-core processors and 96 GB of RAM.  Centaurus is larger still, with 8 quad-core processors and 512 GB of RAM in total.  These computers are particularly suitable for large SMP (shared memory) applications and for serial programs with large memory requirements.
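
A shared-memory job on such a machine typically requests a number of cores on a single node and sets the thread count of the program to match.  A sketch, again assuming PBS-style directives and an OpenMP program:

    #PBS -l nodes=1:ppn=16        # request 16 cores on one shared-memory node

    cd $PBS_O_WORKDIR
    export OMP_NUM_THREADS=16     # let the OpenMP program use all 16 requested cores
    ./my_smp_program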

TC is a cluster containing 12 compute nodes.  Each node has 2 single-core processors and 4 GB of shared main memory.  The nodes are connected to one another via a high-speed Infiniband network.  TC is a suitable choice for serial jobs, for SMP applications needing only 2 threads, and for small MPI applications.

CLUH is a cluster containing 16 compute nodes.  Each node has 4 single-core processors and 8 GB of shared main memory.  The nodes are connected to one another via a high-speed Infiniband network.  CLUH is a suitable choice for SMP applications needing up to 4 threads as well as MPI applications.

Paris is a cluster of 11 compute nodes.  Each node has 2 quad-core processors and 64 GB of shared main memory (two nodes within the cluster have 128 GB of RAM).  The nodes are connected to one another via a high-speed Infiniband network.  Paris is a suitable choice for a wide range of application types: serial (single core) jobs with large memory requirements; SMP applications needing up to 8 threads; and small to medium MPI jobs.
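
A small MPI job on one of these clusters might be requested roughly as follows (a sketch; the directives and the mpirun call depend on the batch system and the installed MPI implementation):

    #PBS -l nodes=2:ppn=8            # request 2 nodes with 8 cores each

    cd $PBS_O_WORKDIR
    mpirun -np 16 ./my_mpi_program   # start 16 MPI processes across the 2 nodes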

If you require a large number of processor cores and/or a large amount of main memory for your simulations, you will need to work on either the HLRN system or one of the federal supercomputers.  You can find further information under Supercomputing (German).

Workstations

Simulation and graphics workstations are available in the terminal room at the RRZN.  On these computers one can either work locally or log in to Orac (the login server) and work on the cluster systems from there.

File systems

Your HOME directory is identical on all computers.  There are also other file systems available for scratch files or as working directories (for example BIGWORK or TMPDIR).
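
In a job script one would typically compute on one of the working file systems and only keep the results in HOME.  A sketch, assuming the environment variables $BIGWORK and $TMPDIR point to the respective directories:

    cp $HOME/input.dat $BIGWORK/      # stage the input data onto the working file system
    cd $BIGWORK
    ./my_program input.dat            # compute here rather than in HOME
    cp results.dat $HOME/             # copy the results back to HOME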

You can find more detailed information under file systems.

Operating system software

The Linux operating system (mainly Scientific Linux) is used on all computers within the cluster system.  Some of the workstations in the terminal room, however, run under Windows.

Software

There is a wide range of software available for use on the compute servers.

Further information can be found under installed software.

Work environment

The user work environment is controlled by the so-called "modules" concept.  Through the use of modules you are able to set various environment variables and thus gain access to the user software made available on the cluster system.
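
Typical module commands look like the following (the module name intel/12.0 is a hypothetical placeholder; module avail shows what is actually installed):

    module avail                  # list the available software modules
    module load intel/12.0        # load a module, which sets PATH etc. accordingly
    module list                   # show the currently loaded modules
    module unload intel/12.0      # remove a module from the environment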

More information can be found under work environment.

 
