Computational Resources

The DTC maintains several machines for running computationally intensive programs interactively, such as simulations. The servers currently installed include:

Name    Core(s)             Memory
        4 Xeon 3.0 GHz      32GB
        4 Xeon 3.0 GHz      32GB
        4 Xeon 3.0 GHz      32GB
        4 Xeon 2.4 GHz *    64GB

*4 GTX980 Additional Processor Boards

Each core can access the entire memory on its server, and thus it is possible to run a single-core (or multi-core) programme requiring up to 32GB of RAM. Note that the other cores on that server would then need to be idle, or else their memory requirements would draw from the available pool for that server. Also note that "cat /proc/cpuinfo" will show 8 cores per server because of hyperthreading, which helps performance slightly in some cases, but there are only 4 real, independent cores on these servers.
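A quick way to check this for yourself, assuming a standard Linux /proc/cpuinfo layout (and lscpu, if installed), is:

```shell
# Count logical processors (includes hyperthreads; shows 8 on these servers):
grep -c '^processor' /proc/cpuinfo

# Show physical cores per socket (the "cpu cores" field; shows 4 here):
grep -m1 'cpu cores' /proc/cpuinfo

# Or, more readably, if lscpu is available:
lscpu | grep -E '^(CPU\(s\)|Core\(s\) per socket|Thread\(s\) per core)'
```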

For additional options, see the ANC computational resources page; everything there not marked ANC-only is also available to non-ANC DTC students. In particular, the DTC provides significant support to ECDF to obtain good service on their batch-processing cluster with thousands of cores, at a much lower cost than maintaining our own servers, and so ECDF should be used rather than our own servers whenever practical.

Usage Policy

Usage of the DTC compute servers is restricted to PhD and MSc student members of the Neuroinformatics and Computational Neuroscience DTC.

The DTC servers are intended to provide a base for doing interactive work on computationally complex problems. Interactive work means "something whose results you are actively waiting on, sitting at your desk", not something you start and check up on a few hours later. To keep the servers free for interactive use, please submit any batch jobs to ECDF instead, especially when the DTC servers are heavily loaded.

If you do start background runs or any other job that consumes a significant fraction of the machine's resources, first verify that they will not interfere with what is already running on the machine. That is, check to see who else is using it, and how many cores are free, before starting any multi-core job. Second, to preserve a reasonable response time on the command line, please use 'nice'. For instance, rather than "simulate -time long > out &", do "nice -n 15 simulate -time long > out &". See "man nice" for more details. Note that nice won't help at all if your job is using lots of memory, because the system will then be swapping your job to disk and back whenever other jobs are idle, making the system unusable. Third, while your big job runs, check periodically to make sure that the machine has not become overloaded in the meantime, and scale it back if it has. If in any doubt about the suitability of your job, please just use ECDF, which has far more computational power and has policies and programmes for load management that help reduce contention; our own servers work well only when everyone is polite and considerate.
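Put together, a polite way to start a background run might look like the following sketch. The "simulate" command is the illustrative example from above, not a real program on these servers:

```shell
# Step 1: see who else is logged in and what they are running,
# and check current load relative to the 4 real cores:
w
uptime

# Step 2: start the job at low priority (niceness 15) so that
# interactive users keep a responsive shell:
nice -n 15 simulate -time long > out &

# Step 3: later, confirm the machine has not become overloaded:
top
```

Note that "nice 15 simulate ..." (without the -n) would not work: nice would try to run a command named "15". The modern form is "nice -n 15"; the historical "nice -15" also works on most systems but is easily misread.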

If you are trying to use a machine for interactive DTC work but find that long-running jobs, particularly those with high memory requirements, are interfering with your work, feel free to email or talk to the owner of those jobs. Ask that they be moved to systems explicitly designed for batch use, such as ECDF, or at least suspended temporarily while you do your interactive work.