Initially, one would assume the best way to utilize the system is the Parallel Computing Toolbox, and surely some applications fit that mold very well. In my case, the parallelism in my code was very data-oriented, so the toolbox could not give me a great speedup. Instead, I simply used the cluster as a set of easily queued single-thread computers. A simple script turns a file of MATLAB commands into a series of queue submissions:
cat matcommands | while read line; do
    echo "#!/bin/bash" > temp.sh
    echo "#PBS -l nodes=1:ppn=1,walltime=1:00:00:00" >> temp.sh
    echo "#PBS -V" >> temp.sh
    echo "#PBS -N Bmatlab_s" >> temp.sh
    echo "cd WorkingDirectory" >> temp.sh
    echo "matlab -nojvm -nodisplay -nodesktop -nosplash -r \"" $line '"' >> temp.sh
    msub temp.sh
done
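For illustration, given a matcommands line such as myscript(3); exit(); (a hypothetical script name), the loop above would generate a temp.sh roughly like this and hand it to msub:

```
#!/bin/bash
#PBS -l nodes=1:ppn=1,walltime=1:00:00:00
#PBS -V
#PBS -N Bmatlab_s
cd WorkingDirectory
matlab -nojvm -nodisplay -nodesktop -nosplash -r " myscript(3); exit(); "
```

Each submitted copy is a one-node, one-processor job, so the queue treats every MATLAB command as an independent serial job.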
The "matcommands" file is simply a series of single lines of the form

script(arguments); exit();
script(arguments); exit();

and so on; these will be executed in parallel, independently of each other. Very handy when testing a script with varying data. Each script must be written such that its output is saved to a file whose name depends on the arguments. I would then download all the result files to my desktop and analyze everything there.
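Building such a file for a parameter sweep is itself easy to script. This is a hedged sketch, not my original tooling: "runtrial" is a hypothetical MATLAB function assumed to save its results under a name derived from its argument (e.g. results_100.mat), which is what makes the later bulk download workable.

```shell
# Hypothetical sketch: build a matcommands file sweeping one parameter.
# Each line becomes one independent queue submission via the loop above.
rm -f matcommands
for n in 100 200 400; do
    # runtrial() is assumed to write its output to a file named after $n
    echo "runtrial($n); exit();" >> matcommands
done
cat matcommands
```

The same pattern extends to multiple arguments; as long as every invocation encodes its arguments in its output filename, no two jobs can clobber each other's results.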
This method of using MATLAB on the cluster also works around the restriction that Krylov only has a 16-CPU license for the Parallel Computing Toolbox. Instead, around 30 of my jobs would typically be processed at the same time (a limit I assume is simply a local queue policy).