# UAntwerp Tier-2 Clusters

## Login infrastructure

You can log in to the CalcUA infrastructure in two ways:

  • using SSH via login.hpc.uantwerpen.be,

  • or using the webportal at portal.hpc.uantwerpen.be.

When using SSH, you can also log in directly to the login nodes of the individual clusters using one of the following hostnames.

| Cluster | Generic login name | Individual login node |
|---|---|---|
| Vaughan | login-vaughan.hpc.uantwerpen.be | login1-vaughan.hpc.uantwerpen.be<br>login2-vaughan.hpc.uantwerpen.be |
| Leibniz | login-leibniz.hpc.uantwerpen.be<br>login.hpc.uantwerpen.be | login1-leibniz.hpc.uantwerpen.be<br>login2-leibniz.hpc.uantwerpen.be |
| Visualization | | viz1-leibniz.hpc.uantwerpen.be |
| Breniac | login-breniac.hpc.uantwerpen.be | |

> **Note:** Direct login to the login nodes and the visualization node is only possible from within Belgium. From outside Belgium, a VPN connection to the UAntwerp network is required.
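For convenience, you can define a shortcut for the generic login host in your OpenSSH client configuration. The sketch below is illustrative: `vscXXXXX` is a placeholder for your own VSC account name, and the `IdentityFile` path is an assumption that depends on where you stored your SSH key.

```
# ~/.ssh/config -- example entry (vscXXXXX and the key path are placeholders)
Host calcua
    HostName login.hpc.uantwerpen.be
    User vscXXXXX
    IdentityFile ~/.ssh/id_rsa_vsc
```

With this entry in place, `ssh calcua` connects you to the generic login node.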

## Compute clusters

The CalcUA infrastructure consists of three compute clusters. Partitions shown in **bold** are the default partition for the corresponding cluster.

### Vaughan

| Partition | Nodes | CPU / GPU | Memory | Maximum wall time |
|---|---|---|---|---|
| **zen2** | 152 | 2x 32-core AMD EPYC 7452 | 256 GB | 3 days |
| zen3 | 24 | 2x 32-core AMD EPYC 7543 | 256 GB | 3 days |
| zen3_512 | 16 | 2x 32-core AMD EPYC 7543 | 512 GB | 3 days |
| ampere_gpu | 1 | 2x 32-core AMD EPYC 7452<br>4x NVIDIA A100 (Ampere) 40 GB SXM4 | 256 GB | 1 day |
| arcturus_gpu | 2 | 2x 32-core AMD EPYC 7452<br>2x AMD Instinct MI100 (Arcturus) 32 GB HBM2 | 256 GB | 1 day |

### Leibniz

| Partition | Nodes | CPU / GPU | Memory | Maximum wall time |
|---|---|---|---|---|
| **broadwell** | 144 | 2x 14-core Intel Xeon E5-2680v4 | 128 GB | 3 days |
| broadwell_256 | 8 | 2x 14-core Intel Xeon E5-2680v4 | 256 GB | 3 days |
| pascal_gpu | 2 | 2x NVIDIA Tesla P100 (Pascal) 16 GB HBM2 | 128 GB | 1 day |

### Breniac

| Partition | Nodes | CPU | Memory | Maximum wall time |
|---|---|---|---|---|
| **skylake** | 23 | 2x 14-core Intel Xeon Gold 6132 | 192 GB | 7 days |
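A minimal Slurm job script selecting one of the partitions above might look as follows. This is a sketch, not a site-endorsed template: the module command and the executable name are placeholder assumptions you must adapt to your own software environment.

```bash
#!/bin/bash
#SBATCH --partition=zen3          # one of the partitions listed above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64      # zen3 nodes have 2x 32 cores
#SBATCH --time=01:00:00           # must stay below the partition's maximum wall time
#SBATCH --job-name=example

module --force purge              # assumed module setup; check the local documentation
module load calcua/all            # placeholder toolchain module

srun ./my_program                 # placeholder executable
```

Note that requesting a GPU partition (e.g. `ampere_gpu`) additionally requires a GPU request such as `--gpus`; consult the cluster documentation for the exact options.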

## Storage infrastructure

The storage is organised according to the VSC storage guidelines.

| Environment variable | Type | Access | Backup | Default quota | Total capacity |
|---|---|---|---|---|---|
| `$VSC_HOME` | NFS/XFS | VSC | YES | 3 GB, 20k files | 3.5 TB |
| `$VSC_DATA` | NFS/XFS | VSC | YES | 25 GB, 100k files | 60 TB |
| `$VSC_SCRATCH`<br>`$VSC_SCRATCH_SITE` | BeeGFS | Site | NO | 50 GB, 100k files | 0.6 PB |
| `$VSC_SCRATCH_NODE` | ext4 | Node | NO | node-specific | |

For node-specific details, see the hardware information for Vaughan, Leibniz and Breniac.
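The environment variables above are defined on the CalcUA systems, so scripts can resolve storage locations without hard-coding paths. A small Python sketch (the fallback values are illustrative, for running outside the cluster only):

```python
import os

def storage_paths():
    """Return the VSC storage locations, falling back to local defaults
    when the CalcUA environment variables are not set."""
    return {
        "home": os.environ.get("VSC_HOME", os.path.expanduser("~")),
        "data": os.environ.get("VSC_DATA", os.path.expanduser("~/data")),
        "scratch": os.environ.get("VSC_SCRATCH", "/tmp"),
    }

paths = storage_paths()
print(paths["scratch"])  # on CalcUA: your site-wide BeeGFS scratch directory
```

Because scratch space is not backed up, treat `paths["scratch"]` as temporary working storage and copy results you want to keep to `$VSC_DATA`.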

> **See also:** For more information on the file systems, please see the UAntwerp storage page.