This is the main site for information about SNOW, the DTIC's new HPC cluster. The cluster provides high performance computing support to all DTIC members. If you are a DTIC member, or you collaborate in any of the DTIC's research projects, you are eligible to use SNOW.
Any HPC environment is somewhat particular: things have to be done in a specific way. Please take your time to navigate through this site and become familiar with the HPC environment. Read the system description carefully, along with the instructions on how to use the cluster correctly. Additionally, visit the Examples section to see worked examples of real use cases.
Since SNOW is a shared system (all DTIC users share the same cluster), you are advised to use it with care and for research purposes only. Unauthorized use of the cluster may result in suspension of the service.
The function of HPC (High Performance Computing) environments is to solve these highly complex equations. Featuring hundreds, thousands, or even hundreds of thousands of processors working together, they can perform an enormous number of operations per second, taking mathematics, physics, chemistry, imaging, and many other fields to the next level.
Rather than solving complex computational problems with only the user's workstation, an HPC cluster gives the researcher hundreds or even thousands of times the computing power of his or her own machine.
Because the cluster is a shared environment with high (but limited) resources, some form of logical grouping of resources and accounting mechanisms had to be set up. This is the mission of the Job Scheduler.
A job scheduler is a computer application that controls unattended background program execution (commonly called batch processing). Users submit their jobs to the scheduler, and the scheduler plans the execution of those jobs in the most efficient way. Depending on a job's resource demands, it will be scheduled on different nodes or at different times.
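As an illustration, a submission to a batch scheduler typically takes the form of a small script that describes the resources the job needs, followed by the command to run. The sketch below assumes a SLURM-style scheduler; the actual scheduler installed on SNOW, as well as the queue names and resource limits, may differ, so check the system description before submitting anything.

```shell
#!/bin/bash
# Hypothetical SLURM batch script (my_job.sh). The #SBATCH directive
# names are standard SLURM, but the job name, output file, and limits
# below are illustrative examples, not SNOW-specific values.
#SBATCH --job-name=my_simulation    # name shown in the job queue
#SBATCH --output=my_simulation.out  # file where stdout/stderr are written
#SBATCH --ntasks=4                  # number of parallel tasks requested
#SBATCH --time=01:00:00             # wall-clock time limit (1 hour)

# The actual computation: replace with your own program.
srun ./my_program
```

Under this assumption, the script would be submitted with `sbatch my_job.sh` and its status checked with `squeue`; the scheduler then decides when and where the job runs based on the requested resources.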