Merge branch 'mlp2017-8/semester_2_materials' of https://github.com/CSTR-Edinburgh/mlpractical into mlp2017-8/semester_2_materials

AntreasAntoniou 2018-02-05 13:46:21 +00:00
commit c3ff1774af
2 changed files with 7 additions and 7 deletions

File 1 of 2:

@@ -30,4 +30,4 @@ export TMP=/disk/scratch/${STUDENT_ID}/
source /home/${STUDENT_ID}/miniconda3/bin/activate mlp
-python network_trainer.py --batch_size 128 --epochs 200 --experiment_prefix vgg-net-emnist-sample-exp --dropout_rate 0.4 --batch_norm_use True --strided_dim_reduction True --seed 25012018
+python emnist_network_trainer.py --batch_size 128 --epochs 200 --experiment_prefix vgg-net-emnist-sample-exp --dropout_rate 0.4 --batch_norm_use True --strided_dim_reduction True --seed 25012018
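This hunk renames the trainer invocation in the cluster job script from `network_trainer.py` to `emnist_network_trainer.py`. For context, a job script like this is submitted to Slurm rather than run directly; a minimal sketch, assuming the script is saved as `emnist_job.sh` (a hypothetical name, since the diff does not show the filename):

```bash
# Submit the job script to Slurm (the filename is an assumption for illustration)
sbatch emnist_job.sh
```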

File 2 of 2:

@@ -1,4 +1,4 @@
-#GPU Cluster Quick-Start Guide
+# GPU Cluster Quick-Start Guide
This guide introduces students to the basics of using the mlp1/mlp2 GPU clusters. It is not intended to be
an exhaustive guide that goes deep into the micro-details of the Slurm ecosystem. For an exhaustive guide please visit
@@ -54,7 +54,7 @@ git config --global user.name "[your name]"
git config --global user.email "[matric-number]@sms.ed.ac.uk"
```
9. Now clone the mlpractical repo using ```git clone https://github.com/CSTR-Edinburgh/mlpractical.git```.
-10. Checkout the mlp_tf_tutorial branch using ```git checkout mlp2017-8/mlp_tf_tutorial```.
+10. Checkout the semester_2 branch using ```git checkout mlp2017-8/semester_2_materials```.
11. ```cd mlpractical``` and then install the required packages using ```pip install -r requirements_gpu.txt```.
12. Once this is done, you will need to set up the MLP_DATA path using the following block of commands:
```bash
@@ -72,7 +72,7 @@ export MLP_DATA_DIR=$HOME/mlpractical/data
13. That completes the required installations. Proceed to the next section, which outlines how to use the Slurm cluster
management software. Please remember to clean your setup files using ```conda clean -t```. A consolidated sketch of steps 9-12 follows below.
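For convenience, the repository setup from steps 9-12 condensed into one block (a sketch: every command is taken from the steps above, with the `cd` moved before the checkout since a branch can only be checked out from inside the cloned repo):

```bash
# Step 9: clone the course repository
git clone https://github.com/CSTR-Edinburgh/mlpractical.git
cd mlpractical
# Step 10: switch to the semester 2 materials branch
git checkout mlp2017-8/semester_2_materials
# Step 11: install the GPU requirements
pip install -r requirements_gpu.txt
# Step 12: point MLP_DATA_DIR at the data directory (path as shown in the hunk header above)
export MLP_DATA_DIR=$HOME/mlpractical/data
```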
-###Using Slurm
+### Using Slurm
Slurm provides commands for submitting, deleting, viewing, and exploring current jobs, nodes, and resources, among other tasks.
To submit a job, use ```sbatch script.sh```, which automatically finds available nodes and passes on the job, the
resources, and the restrictions required. Here script.sh is the bash script containing the job that we want to run. Since we will be using the NVIDIA CUDA and CUDNN libraries
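Besides sbatch, a few stock Slurm commands cover the monitoring tasks mentioned above (a quick-reference sketch; these are standard Slurm tools, not repo-specific scripts, and ${STUDENT_ID} is assumed to hold your cluster username as in the scripts above):

```bash
squeue -u ${STUDENT_ID}   # view your queued and running jobs
scancel <job_id>          # delete (cancel) a job by its numeric id
sinfo                     # explore the cluster's nodes and partitions
```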
@@ -86,7 +86,7 @@ To submit a job one needs to use ```sbatch script.sh``` which will automatically
#SBATCH --mem=16000 # memory in MB
#SBATCH -o outfile # send stdout to outfile
#SBATCH -e errfile # send stderr to errfile
-#SBATCH -t 0:01:00 # time requested in hour:minute:seconds
+#SBATCH -t 1:00:00 # time requested in hour:minute:seconds
# Setup CUDA and CUDNN related paths
export CUDA_HOME=/opt/cuda-8.0.44
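Assembling the fragments shown in this hunk, a complete minimal job script might look as follows (a sketch: the SBATCH directives, CUDA_HOME, and the python command are taken from this commit, while the shebang and the LD_LIBRARY_PATH line are assumptions about a typical setup):

```bash
#!/bin/sh
#SBATCH --mem=16000  # memory in MB
#SBATCH -o outfile   # send stdout to outfile
#SBATCH -e errfile   # send stderr to errfile
#SBATCH -t 1:00:00   # time requested in hour:minute:seconds

# Set up CUDA and CUDNN related paths (the LD_LIBRARY_PATH line is an assumed convention)
export CUDA_HOME=/opt/cuda-8.0.44
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH

# Activate the conda environment and run the trainer,
# mirroring the job script from the first file in this commit
source /home/${STUDENT_ID}/miniconda3/bin/activate mlp
python emnist_network_trainer.py --batch_size 128 --epochs 200 \
    --experiment_prefix vgg-net-emnist-sample-exp --dropout_rate 0.4 \
    --batch_norm_use True --strided_dim_reduction True --seed 25012018
```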
@@ -161,6 +161,6 @@ cp ~/output /afs/inf.ed.ac.uk/u/s/<studentUUN>
This should copy the files directly to AFS. Alternatively, one can use rsync as shown earlier; a sketch follows below.
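A hedged sketch of the rsync alternative (the flags are standard rsync options; the destination follows the AFS path pattern from the cp command above, with <studentUUN> left as a placeholder):

```bash
# -a preserves permissions and timestamps, -v is verbose, -z compresses in transit
rsync -avz ~/output /afs/inf.ed.ac.uk/u/s/<studentUUN>/
```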
-##Additional Help
+## Additional Help
If you require additional help, then as usual please post on Piazza or ask at the tech support helpdesk.