PAUP* is not currently a parallel application: a single PAUP* analysis cannot be sped up by running it in parallel. Researchers using PAUP* on the cluster nevertheless realize speedups because they typically need to run many independent analyses, each in a separate instance of PAUP* with different input. Because the analyses are independent, they can all run simultaneously. The methods described below allow you to start multiple analyses at the same time. The procedure assumes that you are already familiar with working with PAUP* (on your personal workstation, for example).
The runjobs.paup script starts your PAUP* jobs. All you need to decide is the range of datasets you would like to analyze: runjobs.paup takes the number of the first dataset and the number of the last. For example, to run analyses for datasets 2-5 only (of the 10 described above), you would type:
runjobs.paup 2 5
Similarly, to run all 10 datasets, you would type:
runjobs.paup 1 10
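runjobs.paup is provided for you, so you never need to write it yourself, but a sketch of what such a wrapper does can help demystify it. The sketch below simply echoes the per-dataset commands so it can be run anywhere; a real wrapper would hand each command to the batch scheduler (for example, via qsub) instead. The ds<N> directory names, the paup.nex command-file name, and the paup -n invocation are illustrative assumptions.

```shell
# submit_range: print the command that would be submitted for each
# dataset in the given range. Commands are echoed here so the sketch
# runs without a scheduler; a real wrapper would submit each one
# (e.g. pipe it to qsub) instead.
submit_range() {
    start=$1
    end=$2
    i=$start
    while [ "$i" -le "$end" ]; do
        # One independent job per dataset directory (assumed layout).
        echo "cd ds$i && paup -n paup.nex"
        i=$((i + 1))
    done
}

submit_range 2 5   # same dataset range as the example above
```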
The runjobs.paup script will output a series of "Job IDs" associated with your jobs (each analysis is one "job" in the batch scheduler); you do not need to keep track of them. To see a listing of all of the jobs that are currently active on the cluster, use the qstat command. That is, type:

qstat
When your jobs are finished, they will no longer appear in this list.
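If you prefer not to re-run qstat by hand, a small polling loop can watch for your jobs to finish. This is only a convenience sketch: it takes the job-listing command as an argument (for PBS-style schedulers, qstat -u $USER limits the listing to your own jobs; adjust for your site if needed).

```shell
# wait_for_jobs: poll until the given job-listing command prints
# nothing, i.e. until no jobs remain in the listing.
wait_for_jobs() {
    check=$1
    while [ -n "$($check 2>/dev/null)" ]; do
        sleep 30   # check again every 30 seconds
    done
    echo "all jobs finished"
}

# Usage (assumed PBS-style scheduler):
# wait_for_jobs "qstat -u $USER"
```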
When your jobs no longer appear in the qstat listing, you can look in the directories for the finished jobs. For example, if the analyses for datasets 2, 4 and 9 appear to be finished, you can look for output in the ds2, ds4 and ds9 directories, respectively, all of which are inside the project_abc project directory created above.
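A quick way to inspect several finished datasets at once is a small shell loop over the dataset numbers. The sketch below assumes the project_abc/ds<N> layout described above.

```shell
# show_outputs: list the output files for each finished dataset
# number given as an argument, assuming the project_abc/ds<N> layout.
show_outputs() {
    for n in "$@"; do
        echo "=== ds$n ==="
        ls "project_abc/ds$n"
    done
}

# e.g. for the datasets 2, 4 and 9 mentioned above:
# show_outputs 2 4 9
```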
If you can analyze each set of output files separately, you can begin transferring them back to your workstation at any time. If you need to combine the output files into one large output file in order to process them, you will need to wait until all of the jobs have finished. If you need help combining the output files at that point, please contact us for assistance.
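For simple cases, combining the per-dataset output files is just a concatenation in dataset order. The sketch below is one way to do that; the per-dataset output file name depends on your PAUP* command file, so "paup.log" in the usage example is only a placeholder for whatever name your analyses actually produce.

```shell
# combine_outputs: concatenate one named output file from each dataset
# directory, in numeric dataset order, into a single combined file.
# Assumes the project_abc/ds<N> layout described above.
combine_outputs() {
    start=$1
    end=$2
    name=$3       # output file produced in each ds<N> directory
    combined=$4   # combined file to create
    : > "$combined"   # truncate/create the combined file
    i=$start
    while [ "$i" -le "$end" ]; do
        cat "project_abc/ds$i/$name" >> "$combined"
        i=$((i + 1))
    done
}

# e.g.: combine_outputs 1 10 paup.log all_results.log
```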