Important!
- Do NOT run compute-intensive processes on the sc headnode (NO vscode, ipython, tensorboard, etc.); they will be killed automatically
- The cluster is a shared resource; always be mindful of other users
Overview
The SC compute cluster, originally the SAIL compute cluster, aggregates research compute nodes from various groups within Stanford Computer Science and controls them through a central batch queueing system that coordinates all jobs running on the cluster. The open-source workload scheduler SLURM is solely responsible for managing resources such as CPU, memory, and GPU for each job.
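You can list the partitions and nodes that SLURM manages from the headnode, for example (the node name below is a hypothetical placeholder):

    # Show partitions and node states
    sinfo
    # Show details (CPUs, memory, GPUs) for a single node; node name is hypothetical
    scontrol show node sc-node-01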
Once you have access to the cluster, you can submit, monitor, and manage jobs from the headnode, sc.stanford.edu. This machine should not be used for any compute-intensive work.
You can use the cluster by starting batch jobs or interactive jobs. Interactive jobs give you a shell on one of the compute nodes, from which you can execute commands by hand, whereas batch jobs run a given shell script in the background and terminate automatically when finished.
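For example, the two styles might look like the following sketch (the partition name, resource requests, and script contents are illustrative placeholders, not site defaults):

    # Interactive job: request a shell on a compute node
    srun --partition=sc --gres=gpu:1 --mem=8G --time=01:00:00 --pty bash

    # Batch job: submit a script, then monitor or cancel it from the headnode
    sbatch my_job.sh     # my_job.sh is a placeholder script name
    squeue -u $USER      # list your queued and running jobs
    scancel 123456       # cancel a job by its (example) job ID

A minimal my_job.sh could contain (again, placeholder values):

    #!/bin/bash
    #SBATCH --partition=sc       # example partition name
    #SBATCH --gres=gpu:1         # request one GPU
    #SBATCH --mem=8G             # request 8 GB of memory
    #SBATCH --time=01:00:00      # one-hour time limit
    python train.py              # placeholder workload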
Access
To gain access to the SC cluster, please fill out the access request form.
If you already have access to the SC cluster but need to use another partition (e.g. rotation, faculty-approved collaboration, etc.), please use the Update Group Affiliation on SC Cluster request type on the access request form.
Access to the cluster is available only from the Stanford network or via the Stanford VPN service. You will need to use the VPN in full tunnel mode, not split tunnel.
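Assuming you log in with your SUNet ID (an assumption, not an official instruction), connecting looks like:

    # From the Stanford network, or with the full-tunnel VPN active
    ssh <sunetid>@sc.stanford.edu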
New in 2023! SC Cluster OnDemand is now available. It is meant as a supplementary, convenient way to access the SC cluster, not a replacement for traditional SSH access. SC Cluster OnDemand is a web-based portal powered by an NSF-funded, open-source project that is used at many high-performance computing centers around the world.
If you need to perform heavy I/O operations (e.g. downloading large datasets) or to crawl or scrape data from the internet, please do so via scdt.stanford.edu (SSH); see the Dedicated data-transfer node section under Cluster Storage for details.
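As a sketch, a large transfer through the data-transfer node might look like this (the paths and URL are placeholders):

    # Copy a local dataset to cluster storage via scdt.stanford.edu
    rsync -avP ./my_dataset/ <sunetid>@scdt.stanford.edu:/path/to/cluster/storage/

    # Or SSH in and download directly on the node
    ssh <sunetid>@scdt.stanford.edu
    wget https://example.com/large_dataset.tar.gz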
If you have any questions about the cluster, please send us a request at http://support.cs.stanford.edu