Q&CE Servers¶
In the Q&CE department, people have access to a number of servers that are used for different applications. All servers are accessible with SSH from within the TU Delft network. From outside the TU Delft network you can connect through a Linux bastion host (employees: `linux-bastion.tudelft.nl`, students: `student-linux.tudelft.nl`) or with a VPN connection. Access to the servers is only possible with a Q&CE account. All servers are hosted in the `ewi.tudelft.nl` domain, so append the domain name to the server name when you want to connect to a server. If you want to use interactive X11 applications, add the `-X` parameter to SSH to enable X11 forwarding:
```shell
# Connect to the cluster login server without X11 forwarding
ssh <qce-username>@qce-cluster-login.ewi.tudelft.nl

# Connect to the qce-apps01 application server with X11 forwarding
ssh -X <qce-username>@qce-apps01.ewi.tudelft.nl
```
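When you regularly connect from outside the TU Delft network, jumping through the bastion host can be automated in your SSH client configuration. A minimal sketch of an `~/.ssh/config` entry; the host alias and the placeholder usernames are illustrative, adjust them to your own NetID and Q&CE account:

```
# ~/.ssh/config -- example entry; usernames are placeholders
Host qce-apps01
    HostName qce-apps01.ewi.tudelft.nl
    User <qce-username>
    # Route the connection through the employee bastion host;
    # students use student-linux.tudelft.nl instead
    ProxyJump <netid>@linux-bastion.tudelft.nl
    # Enable X11 forwarding for interactive applications
    ForwardX11 yes
```

With such an entry in place, `ssh qce-apps01` connects through the bastion host in a single step.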
Application Servers¶
The application servers are used for interactive design applications and running simulations.
qce-apps01¶
The `qce-apps01` server is mostly used for interactive design and running small simulations. To prevent a single user's simulation from consuming all server resources, resource usage per user is limited. This way the server stays responsive for people who use `qce-apps01` for designing. Running batch simulations, which can be run on the Q&CE cluster instead, is NOT appreciated.
Name | Cores | Threads | Memory | Local Storage | CPU Speed |
---|---|---|---|---|---|
qce-apps01 | 32 | 32 | 128 GB | none | 2.8 GHz |
Warning
Resource usage on the `qce-apps01` server is limited to a maximum of 8 CPU threads and 32 GB of memory per user!
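To stay within this limit, tell your tools explicitly how many threads they may use. The exact mechanism depends on the tool, but OpenMP-based software can be capped via an environment variable. A hedged sketch (`my_simulation` is a placeholder for your own program):

```shell
# Cap OpenMP-based tools at the per-user limit of 8 threads
export OMP_NUM_THREADS=8

# Verify the setting before launching a long run
echo "Using $OMP_NUM_THREADS threads"

# Launch at reduced scheduling priority so interactive users are
# not slowed down (my_simulation is a placeholder binary):
#   nice -n 10 ./my_simulation
```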
qce-icdesign¶
The `qce-icdesign` server is a dedicated server for IC design workflows. It is optimized to run IC design software from Cadence, Synopsys, and Mentor Graphics. To prevent a single user's circuit simulation from consuming all server resources, resource usage per user is limited. This way the server stays responsive for people who use `qce-icdesign` for designing. Running applications other than IC design software is NOT appreciated.
Name | Cores | Threads | Memory | Local Storage | CPU Speed |
---|---|---|---|---|---|
qce-icdesign | 64 | 128 | 256 GB | 1 TB | 2.9 GHz |
Warning
Resource usage on the `qce-icdesign` server is limited to a maximum of 16 CPU threads and 32 GB of memory per user!
qce-cluster-login¶
The `qce-cluster-login` server is the login node for the Q&CE cluster. On this machine you can submit your jobs to the cluster. You can also use this server for post-processing of simulation data or to test a batch job by running it directly on the login node. Running regular production jobs on the login node is not allowed.
Name | Cores | Threads | Memory | Local Storage | CPU Speed |
---|---|---|---|---|---|
qce-cluster-login | 8 | 8 | 64 GB | 300 GB | 2.8 GHz |
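As an illustration of the submit-and-test workflow, assuming the cluster uses a SLURM batch scheduler (an assumption; check the Q&CE cluster documentation for the scheduler actually in use), a minimal job script could look like:

```shell
#!/bin/bash
# example-job.sh -- minimal batch job sketch, ASSUMING a SLURM
# scheduler; my_simulation is a placeholder for your own program
#SBATCH --job-name=test-job
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

./my_simulation
```

You would first test the script briefly on the login node (`bash example-job.sh`) and then submit it to the cluster with `sbatch example-job.sh`.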
Access Servers¶
The access servers provide remote access with a Linux X-Windows desktop environment.
qce-xportal¶
The `qce-xportal` server provides a remote X-Windows desktop environment for users who need to run X-Windows applications from outside the TU Delft network or who don't have X-Windows support on their workstation. The Linux desktop environment is served by X2Go, and you need an X2Go client to connect to it. X2Go has clients for Windows, macOS, and Linux. See How to setup X2Go for the qce-xportal server for more information.
With X2Go you can suspend your session and reconnect to it later or from another location. You can use small desktop applications on this server, but don't run any resource-intensive design or simulation software locally.
Name | Cores | Threads | Memory | Local Storage | CPU Speed |
---|---|---|---|---|---|
qce-xportal | 4 | 4 | 16 GB | none | 2.8 GHz |
Warning
Suspended X2Go sessions that are idle for more than 10 days are terminated!
Development Servers¶
ce-cuda01¶
Name | Cores | Threads | Memory | Local Storage | GPU | CPU Speed |
---|---|---|---|---|---|---|
ce-cuda01 | 12 | 24 | 128 GB | 1 TB | | 2.4 GHz |
Project Servers¶
Project servers are purchased for a specific goal or project. They are usually financed with project money, and access to these servers is therefore usually limited to project members only.
qce-alveo01 (limited access)¶
The `qce-alveo01` server is used to develop FPGA-accelerated applications with Xilinx Alveo accelerator cards. The server has two Alveo cards available, a U200 and a U280 with HBM memory. Access to this machine is limited to Xilinx FPGA acceleration project members only.
Name | Cores | Threads | Memory | Local Storage | FPGA | CPU Speed |
---|---|---|---|---|---|---|
qce-alveo01 | 20 | 40 | 96 GB | 1 TB | Alveo U200, Alveo U280 | 2.2 GHz |
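After logging in, the Xilinx Runtime (XRT) tooling can be used to check which Alveo cards are visible. A sketch assuming XRT is installed in its default location; the install path and the availability of `xbutil` on this server are assumptions, so check the server's actual setup:

```shell
# Load the XRT environment (default install path is an assumption)
source /opt/xilinx/xrt/setup.sh

# List the installed Alveo cards (the U200 and U280 on this server)
xbutil examine
```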
qce-power9 (limited access)¶
The `qce-power9` server is used to develop FPGA-accelerated applications with OpenCAPI on the OpenPOWER architecture. The server has two POWER9 CPUs and an AlphaData ADM-PCIE-9H7 FPGA accelerator card with HBM. Access to this machine is limited to OpenCAPI developers only.
Name | Cores | Threads | Memory | Local Storage | FPGA | CPU Speed |
---|---|---|---|---|---|---|
qce-power9 | 44 | 176 | 128 GB | 4 TB | ADM-PCIE-9H7 | 2.9 GHz |