Dear Future User of ETROFARM,
To process your access request, please provide the following information:
Steps to follow:
This is our standard procedure. If you've already provided some of the requested information, there's no need to resend it.
Kind regards,
ict@etrovub.be
To access machines in the ETROVUB bubble (using an etrovub account), please use an OpenVPN connection.
To download the software, go to https://vpn.etrovub.be and log in with your etrovub credentials.
Download the latest version it offers and install it.
Once installed, open the client, connect with the same etrovub credentials (username only, without any etrovub prefix or suffix), and click OK.
The connection is then established.
When generating SSH keys, the private key should always be stored securely on your local machine,
while the public key is meant to be shared: simply copy it and send it to the other party.
You'll also be asked whether to use a passphrase; this is optional.
A passphrase is recommended on laptops or shared systems for extra protection,
but you can skip it if you're setting up automation or scripts where prompts would get in the way.
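If you do choose a passphrase, an SSH agent saves retyping it on every connection. A minimal sketch (the throwaway key below uses an empty passphrase purely so the example is self-contained; your real key would live in ~/.ssh and you would be prompted once at ssh-add):

```shell
# Start an agent and load a key into it; with a passphrase-protected key
# you are asked for the passphrase once here instead of at every ssh login.
eval "$(ssh-agent -s)"
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$tmpdir/id_rsa" -q
ssh-add "$tmpdir/id_rsa"
ssh-add -l    # list the keys the agent currently holds
```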
Here's how to generate an RSA 2048 OpenSSH key pair; the command is the same on Windows (PowerShell), macOS, and Linux:
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa
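After generation, the file to send to the other party is the .pub file; the private key never leaves your machine. A quick way to display it (a throwaway directory is used here so the sketch is self-contained; with the command above the file would be ~/.ssh/id_rsa.pub):

```shell
# Print the public half of the key pair; this is the only part you share.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$tmpdir/id_rsa" -q
cat "$tmpdir/id_rsa.pub"
```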
Log in using ssh (or an equivalent client) on etroflock.etrovub.be (10.0.5.202) using your etrovub credentials.
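As a convenience, an entry in ~/.ssh/config lets you connect with a short alias; the alias name and the username placeholder below are illustrative:

```
# ~/.ssh/config
Host etroflock
    HostName etroflock.etrovub.be
    User <your-etrovub-username>
    IdentityFile ~/.ssh/id_rsa
```

After that, `ssh etroflock` connects directly.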
At ETRO, advancing cutting-edge research hinges on the effective use of High-Performance Computing (HPC) resources. We recognize the need to bridge the gap between desktop computing limitations and the extensive power of HPC systems like Hydra. Our new dedicated GPU farm is designed to transform how users engage with computing resources: moving from power-hungry, noisy desktops in overheated rooms to a more flexible, efficient, and scalable solution.
This upgrade is not just about new hardware; it's about reshaping user behavior and enhancing integration with VUB's existing HPC platforms. By aligning the GPU farm with VUB's systems, we ensure smoother project transitions and better compatibility with Hydra, allowing researchers to focus more on innovation rather than infrastructure.
The GPU farm will help ETRO achieve cost-efficiency, reduce our environmental footprint, and support research with greater flexibility and computational power, all without the high costs associated with commercial cloud solutions or the limitations of local desktops.
Available Resources
Hardware
We offer three nodes with the following configurations:
Partition | Node(s) | CPU | RAM | GPU |
FARM | ETROFARM | AMD EPYC 9124 (Family 25, Model 17), 2 sockets, 2x16x1 = 32 logical CPUs | 377 GB | 4x Nvidia A100 (Ampere), 80 GB, driver 535.183.01, CUDA 12.2 |
COOP | ETROCOOP01, ETROCOOP02 | Intel 13th Gen Core i9-13900 (Family 6, Model 183), 1 socket, 1x8x2 = 16 logical CPUs | 125 GB | 2x Nvidia GeForce RTX 4090 (Ada Lovelace), 24 GB, driver 535.183.01, CUDA 12.2 |
Additional Information:
Software
We provide software built with EasyBuild and managed with Lmod. While we strive to offer packages similar to those available on Hydra, our node architecture and resources differ. We aim to include the most common packages and provide configurations similar to those found on Hydra via EasyBuild easyconfigs.
Users can supply their own build files if necessary.
Note: Singularity containers (as used in Hydra) are not provided.
Priority
We use SLURM's job accounting and fairshare system to manage resource allocation and prevent monopolization.
The fairshare score reflects cluster usage and helps prioritize jobs.
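The pieces above come together in a SLURM batch script. Below is a minimal sketch of a single-GPU job: the partition names come from the hardware table, but the --gres naming, the module name, and the resource values are assumptions to adapt to the actual setup.

```
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=FARM          # FARM (A100 node) or COOP (RTX 4090 nodes)
#SBATCH --gres=gpu:1              # request one GPU (assumes standard GRES naming)
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=01:00:00

# Load software provided via Lmod (module name is hypothetical; run
# "module avail" on the node to see what is actually installed)
module load Python/3.11.3-GCCcore-12.3.0

nvidia-smi                        # confirm the allocated GPU is visible
```

Submit with `sbatch job.sh`; `squeue -u $USER` then shows the job's state, and the fairshare score influences when it starts.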
Training Resources
For SLURM functionality, see the official SLURM documentation:
https://slurm.schedmd.com/.
Practical documentation closely aligned with our setup can be found at Hydra Documentation:
https://hpc.vub.be/docs/
Hydra also offers regular training sessions.