{
|
|
"cells": [
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Introduction\n",
|
|
"\n",
|
|
"This tutorial focuses on implementation of three reqularisaion techniques, two of them are norm based approaches which are added to optimised objective and the third technique, called *droput*, is a form of noise injection by random corruption of information carried by hidden units during training.\n",
|
|
"\n",
|
|
"\n",
|
|
"## Virtual environments\n",
|
|
"\n",
|
|
"Before you proceed onwards, remember to activate your virtual environment:\n",
|
|
" * If you were in last week's Tuesday or Wednesday group type `activate_mlp` or `source ~/mlpractical/venv/bin/activate`\n",
|
|
" * If you were in the Monday group:\n",
|
|
" + and if you have chosen the **comfy** way type: `workon mlpractical`\n",
|
|
" + and if you have chosen the **generic** way, `source` your virutal environment using `source` and specyfing the path to the activate script (you need to localise it yourself, there were not any general recommendations w.r.t dir structure and people have installed it in different places, usually somewhere in the home directories. If you cannot easily find it by yourself, use something like: `find . -iname activate` ):\n",
|
|
"\n",
|
|
"## Syncing the git repository\n",
|
|
"\n",
|
|
"Look <a href=\"https://github.com/CSTR-Edinburgh/mlpractical/blob/master/gitFAQ.md\">here</a> for more details. But in short, we recommend to create a separate branch for this lab, as follows:\n",
|
|
"\n",
|
|
"1. Enter the mlpractical directory `cd ~/mlpractical/repo-mlp`\n",
|
|
"2. List the branches and check which is currently active by typing: `git branch`\n",
|
|
"3. If you have followed our recommendations, you should be in the `coursework1` branch, please commit your local changed to the repo index by typing:\n",
|
|
"```\n",
|
|
"git commit -am \"finished coursework\"\n",
|
|
"```\n",
|
|
"4. Now you can switch to `master` branch by typing: \n",
|
|
"```\n",
|
|
"git checkout master\n",
|
|
" ```\n",
|
|
"5. To update the repository (note, assuming master does not have any conflicts), if there are some, have a look <a href=\"https://github.com/CSTR-Edinburgh/mlpractical/blob/master/gitFAQ.md\">here</a>\n",
|
|
"```\n",
|
|
"git pull\n",
|
|
"```\n",
|
|
"6. And now, create the new branch & swith to it by typing:\n",
|
|
"```\n",
|
|
"git checkout -b lab4\n",
|
|
"```"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Regularisation\n",
|
|
"\n",
|
|
"Regularisation add some *complexity term* to the cost function. It's purpose is to put some prior on the model's parameters. The most common prior is perhaps the one which assumes smoother solutions (the one which are not able to fit training data too well) are better as they are more likely to better generalise to unseen data. \n",
|
|
"\n",
|
|
"A way to incorporate such prior in the model is to add some term that penalise certain configurations of the parameters -- either from growing too large ($L_2$) or the one that prefers solution that could be modelled with less parameters ($L_1$), hence encouraging some parameters to become 0. One can, of course, combine many such priors when optimising the model, however, in the lab we shall use $L_1$ and/or $L_2$ priors.\n",
|
|
"\n",
|
|
"They can be easily incorporated into the training objective by adding some additive terms, as follows:\n",
|
|
"\n",
|
|
"(1) $\n",
|
|
" \\begin{align*}\n",
|
|
" E^n &= \\underbrace{E^n_{\\text{train}}}_{\\text{data term}} + \n",
|
|
" \\underbrace{\\beta_{L_1} E^n_{L_1}}_{\\text{prior term}} + \\underbrace{\\beta_{L_2} E^n_{L_2}}_{\\text{prior term}}\n",
|
|
"\\end{align*}\n",
|
|
"$\n",
|
|
"\n",
|
|
"where $ E^n_{\\text{train}} = - \\sum_{k=1}^K t^n_k \\ln y^n_k $, $\\beta_{L_1}$ and $\\beta_{L_2}$ some non-negative constants specified a priori (hyper-parameters) and $E^n_{L_1}$ and $E^n_{L_2}$ norm metric specifying certain properties of parameters:\n",
|
|
"\n",
|
|
"(2) $\n",
|
|
" \\begin{align*}\n",
|
|
" E^n_{L_p}(\\mathbf{W}) = \\left ( \\sum_{i,j \\in \\mathbf{W}} |w_{i,j}|^p \\right )^{\\frac{1}{p}}\n",
|
|
"\\end{align*}\n",
|
|
"$\n",
|
|
"\n",
|
|
"where $p$ denotes the norm-order (for regularisation either 1 or 2). (TODO: explain here why we usualy skip square root for p=2)\n",
|
|
"\n",
|
|
"## $L_{p=2}$ (Weight Decay)\n",
|
|
"\n",
|
|
"(3) $\n",
|
|
" \\begin{align*}\n",
|
|
" E^n &= \\underbrace{E^n_{\\text{train}}}_{\\text{data term}} + \n",
|
|
" \\underbrace{\\beta E^n_{L_2}}_{\\text{prior term}} = E^n_{\\text{train}} + \\beta_{L_2} \\frac{1}{2}|w_i|^2\n",
|
|
"\\end{align*}\n",
|
|
"$\n",
|
|
"\n",
|
|
"(4) $\n",
|
|
"\\begin{align*}\\frac{\\partial E^n}{\\partial w_i} &= \\frac{\\partial (E^n_{\\text{train}} + \\beta_{L_2} E_{L_2}) }{\\partial w_i} \n",
|
|
" = \\left( \\frac{\\partial E^n_{\\text{train}}}{\\partial w_i} + \\beta_{L_2} \\frac{\\partial\n",
|
|
" E_{L_2}}{\\partial w_i} \\right) \n",
|
|
" = \\left( \\frac{\\partial E^n_{\\text{train}}}{\\partial w_i} + \\beta_{L_2} w_i \\right)\n",
|
|
"\\end{align*}\n",
|
|
"$\n",
|
|
"\n",
|
|
"(5) $\n",
|
|
"\\begin{align*}\n",
|
|
" \\Delta w_i &= -\\eta \\left( \\frac{\\partial E^n_{\\text{train}}}{\\partial w_i} + \\beta_{L_2} w_i \\right) \n",
|
|
"\\end{align*}\n",
|
|
"$\n",
|
|
"\n",
|
|
"where $\\eta$ is learning rate.\n",
|
|
"\n",
|
|
"## $L_{p=1}$ (Sparsity)\n",
|
|
"\n",
|
|
"(6) $\n",
|
|
" \\begin{align*}\n",
|
|
" E^n &= \\underbrace{E^n_{\\text{train}}}_{\\text{data term}} + \n",
|
|
" \\underbrace{\\beta E^n_{L_1}}_{\\text{prior term}} \n",
|
|
" = E^n_{\\text{train}} + \\beta_{L_1} |w_i|\n",
|
|
"\\end{align*}\n",
|
|
"$\n",
|
|
"\n",
|
|
"(7) $\\begin{align*}\n",
|
|
" \\frac{\\partial E^n}{\\partial w_i} = \\frac{\\partial E^n_{\\text{train}}}{\\partial w_i} + \\beta_{L_1} \\frac{\\partial E_{L_1}}{\\partial w_i} = \\frac{\\partial E^n_{\\text{train}}}{\\partial w_i} + \\beta_{L_1} \\mbox{sgn}(w_i)\n",
|
|
"\\end{align*}\n",
|
|
"$\n",
|
|
"\n",
|
|
"(8) $\\begin{align*}\n",
|
|
" \\Delta w_i &= -\\eta \\left( \\frac{\\partial E^n_{\\text{train}}}{\\partial w_i} + \\beta_{L_1} \\mbox{sgn}(w_i) \\right) \n",
|
|
"\\end{align*}$\n",
|
|
"\n",
|
|
"Where $\\mbox{sgn}(w_i)$ is the sign of $w_i$: $\\mbox{sgn}(w_i) = 1$ if $w_i>0$ and $\\mbox{sgn}(w_i) = -1$ if $w_i<0$\n",
|
|
"\n",
|
|
"One can also apply those penalty terms for biases, however, this is usually not necessary as biases have secondary impact on smoothnes of the given solution.\n",
|
|
"\n",
|
|
"## Dropout\n",
|
|
"\n",
|
|
"Dropout, for a given layer's output $\\mathbf{h}^i \\in \\mathbb{R}^{BxH^l}$ (where $B$ is batch size and $H^l$ is the $l$-th layer output dimensionality) implements the following transformation:\n",
|
|
"\n",
|
|
"(9) $\\mathbf{\\hat h}^l = \\mathbf{d}^l\\circ\\mathbf{h}^l$\n",
|
|
"\n",
|
|
"where $\\circ$ denotes an elementwise product and $\\mathbf{d}^l \\in \\{0,1\\}^{BxH^i}$ is a matrix in which $d^l_{ij}$ element is sampled from the Bernoulli distribution:\n",
|
|
"\n",
|
|
"(10) $d^l_{ij} \\sim \\mbox{Bernoulli}(p^l_d)$\n",
|
|
"\n",
|
|
"with $0<p^l_d<1$ denoting the probability the given unit is kept unchanged (dropping probability is thus $1-p^l_d$). We ignore here edge scenarios where $p^l_d=1$ and there is no dropout applied (and the training would be exactly the same as in standard SGD) and $p^l_d=0$ where all units would have been dropped, hence the model would not learn anything.\n",
|
|
"\n",
|
|
"The probability $p^l_d$ is a hyperparameter (like learning rate) meaning it needs to be provided before training and also very often tuned for the given task. As the notation suggest, it can be specified separately for each layer, including scenario where $l=0$ when some random input features (pixels in the image for MNIST) are being also ommitted.\n",
|
|
"\n",
|
|
"### Keeping the $l$-th layer output $\\mathbf{\\hat h}^l$ (input to the upper layer) appropiately scaled at test-time\n",
|
|
"\n",
|
|
"The other issue one needs to take into account is the mismatch that arises between training and test (runtime) stages when dropout is applied. It is due to the fact that droput is not applied when testing hence the average input to the unit in upper layer is going to be bigger when compared to training stage (where some inputs are set to 0), in average $1/p^l_d$ times bigger. \n",
|
|
"\n",
|
|
"So to account for this mismatch one could either:\n",
|
|
"\n",
|
|
"1. When training is finished scale the final weight matrices $\\mathbf{W}^l, l=1,\\ldots,L$ by $p^{l-1}_d$ (remember, $p^{0}_d$ is the probability related to the input features)\n",
|
|
"2. Scale the activations in equation (9) during training, that is, for each mini-batch multiply $\\mathbf{\\hat h}^l$ by $1/p^l_d$ to compensate for dropped units and then at run-time use the model as usual, **without** scaling. Make sure the $1/p^l_d$ scaler is taken into account for both forward and backward passes.\n",
|
|
"\n",
|
|
"\n",
|
|
"Our recommendation is option 2 as it will make some things easier from implementation perspective. "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"metadata": {
|
|
"collapsed": false
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"INFO:root:Initialising data providers...\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"import numpy\n",
|
|
"import logging\n",
|
|
"from mlp.dataset import MNISTDataProvider\n",
|
|
"\n",
|
|
"logger = logging.getLogger()\n",
|
|
"logger.setLevel(logging.INFO)\n",
|
|
"logger.info('Initialising data providers...')\n",
|
|
"\n",
|
|
"train_dp = MNISTDataProvider(dset='train', batch_size=10, max_num_batches=100, randomize=True)\n",
|
|
"valid_dp = MNISTDataProvider(dset='valid', batch_size=10000, max_num_batches=-10, randomize=False)\n",
|
|
"test_dp = MNISTDataProvider(dset='eval', batch_size=10000, max_num_batches=-10, randomize=False)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 3,
|
|
"metadata": {
|
|
"collapsed": false,
|
|
"scrolled": true
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"INFO:root:Training started...\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Training cost (ce) for initial model is 2.624. Accuracy is 8.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Validation cost (ce) for initial model is 2.554. Accuracy is 9.84%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Training cost (ce) is 2.932. Accuracy is 60.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Validation cost (ce) is 0.662. Accuracy is 77.58%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Took 10 seconds. Training speed 306 pps. Validation speed 1506 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Training cost (ce) is 0.503. Accuracy is 84.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Validation cost (ce) is 0.480. Accuracy is 85.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Took 11 seconds. Training speed 205 pps. Validation speed 1593 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Training cost (ce) is 0.350. Accuracy is 88.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Validation cost (ce) is 0.424. Accuracy is 86.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Took 10 seconds. Training speed 278 pps. Validation speed 1502 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Training cost (ce) is 0.233. Accuracy is 93.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Validation cost (ce) is 0.443. Accuracy is 87.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Took 10 seconds. Training speed 263 pps. Validation speed 1508 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Training cost (ce) is 0.184. Accuracy is 94.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Validation cost (ce) is 0.418. Accuracy is 87.77%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Took 11 seconds. Training speed 257 pps. Validation speed 1511 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Training cost (ce) is 0.135. Accuracy is 96.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Validation cost (ce) is 0.415. Accuracy is 88.59%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Took 11 seconds. Training speed 212 pps. Validation speed 1511 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Training cost (ce) is 0.094. Accuracy is 97.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Validation cost (ce) is 0.403. Accuracy is 89.35%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Took 11 seconds. Training speed 226 pps. Validation speed 1621 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Training cost (ce) is 0.066. Accuracy is 98.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Validation cost (ce) is 0.400. Accuracy is 89.45%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Took 11 seconds. Training speed 238 pps. Validation speed 1495 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Training cost (ce) is 0.054. Accuracy is 99.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Validation cost (ce) is 0.399. Accuracy is 89.42%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Took 10 seconds. Training speed 307 pps. Validation speed 1543 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Training cost (ce) is 0.042. Accuracy is 99.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Validation cost (ce) is 0.398. Accuracy is 89.59%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Took 11 seconds. Training speed 245 pps. Validation speed 1520 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Training cost (ce) is 0.032. Accuracy is 99.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Validation cost (ce) is 0.401. Accuracy is 90.02%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Took 11 seconds. Training speed 204 pps. Validation speed 1532 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Training cost (ce) is 0.025. Accuracy is 99.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Validation cost (ce) is 0.402. Accuracy is 90.23%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Took 11 seconds. Training speed 237 pps. Validation speed 1511 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Training cost (ce) is 0.020. Accuracy is 99.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Validation cost (ce) is 0.406. Accuracy is 90.01%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Took 11 seconds. Training speed 219 pps. Validation speed 1515 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Training cost (ce) is 0.017. Accuracy is 99.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Validation cost (ce) is 0.415. Accuracy is 89.85%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Took 10 seconds. Training speed 257 pps. Validation speed 1605 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Training cost (ce) is 0.015. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Validation cost (ce) is 0.423. Accuracy is 89.95%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Took 11 seconds. Training speed 205 pps. Validation speed 1613 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Training cost (ce) is 0.013. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Validation cost (ce) is 0.421. Accuracy is 90.05%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Took 11 seconds. Training speed 223 pps. Validation speed 1515 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Training cost (ce) is 0.011. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Validation cost (ce) is 0.427. Accuracy is 90.09%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Took 10 seconds. Training speed 309 pps. Validation speed 1541 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Training cost (ce) is 0.010. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Validation cost (ce) is 0.432. Accuracy is 90.02%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Took 11 seconds. Training speed 220 pps. Validation speed 1598 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Training cost (ce) is 0.009. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Validation cost (ce) is 0.429. Accuracy is 89.99%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Took 11 seconds. Training speed 216 pps. Validation speed 1543 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Training cost (ce) is 0.008. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Validation cost (ce) is 0.433. Accuracy is 90.12%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Took 11 seconds. Training speed 224 pps. Validation speed 1449 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Training cost (ce) is 0.008. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Validation cost (ce) is 0.440. Accuracy is 90.07%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Took 11 seconds. Training speed 220 pps. Validation speed 1548 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Training cost (ce) is 0.007. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Validation cost (ce) is 0.435. Accuracy is 90.19%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Took 10 seconds. Training speed 269 pps. Validation speed 1522 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Training cost (ce) is 0.007. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Validation cost (ce) is 0.440. Accuracy is 90.08%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Took 11 seconds. Training speed 233 pps. Validation speed 1488 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Training cost (ce) is 0.006. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Validation cost (ce) is 0.441. Accuracy is 90.11%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Took 10 seconds. Training speed 253 pps. Validation speed 1626 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Training cost (ce) is 0.006. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Validation cost (ce) is 0.445. Accuracy is 90.12%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Took 10 seconds. Training speed 309 pps. Validation speed 1522 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Training cost (ce) is 0.006. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Validation cost (ce) is 0.446. Accuracy is 90.11%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Took 11 seconds. Training speed 226 pps. Validation speed 1626 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Training cost (ce) is 0.005. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Validation cost (ce) is 0.450. Accuracy is 90.04%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Took 11 seconds. Training speed 228 pps. Validation speed 1539 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Training cost (ce) is 0.005. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Validation cost (ce) is 0.449. Accuracy is 90.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Took 11 seconds. Training speed 209 pps. Validation speed 1626 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Training cost (ce) is 0.005. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Validation cost (ce) is 0.449. Accuracy is 90.14%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Took 11 seconds. Training speed 211 pps. Validation speed 1522 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Training cost (ce) is 0.004. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Validation cost (ce) is 0.449. Accuracy is 90.24%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Took 11 seconds. Training speed 200 pps. Validation speed 1640 pps.\n",
|
|
"INFO:root:Testing the model on test set:\n",
|
|
"INFO:root:MNIST test set accuracy is 89.58 %, cost (ce) is 0.454\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"#Baseline experiment\n",
|
|
"\n",
|
|
"from mlp.layers import MLP, Linear, Sigmoid, Softmax #import required layer types\n",
|
|
"from mlp.optimisers import SGDOptimiser #import the optimiser\n",
|
|
"\n",
|
|
"from mlp.costs import CECost #import the cost we want to use for optimisation\n",
|
|
"from mlp.schedulers import LearningRateFixed\n",
|
|
"\n",
|
|
"logger = logging.getLogger()\n",
|
|
"logger.setLevel(logging.INFO)\n",
|
|
"rng = numpy.random.RandomState([2015,10,10])\n",
|
|
"\n",
|
|
"#some hyper-parameters\n",
|
|
"nhid = 800\n",
|
|
"learning_rate = 0.5\n",
|
|
"max_epochs = 30\n",
|
|
"cost = CECost()\n",
|
|
" \n",
|
|
"stats = []\n",
|
|
"for layer in xrange(1, 2):\n",
|
|
"\n",
|
|
" train_dp.reset()\n",
|
|
" valid_dp.reset()\n",
|
|
" test_dp.reset()\n",
|
|
" \n",
|
|
" #define the model\n",
|
|
" model = MLP(cost=cost)\n",
|
|
" model.add_layer(Sigmoid(idim=784, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" for i in xrange(1, layer):\n",
|
|
" logger.info(\"Stacking hidden layer (%s)\" % str(i+1))\n",
|
|
" model.add_layer(Sigmoid(idim=nhid, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" model.add_layer(Softmax(idim=nhid, odim=10, rng=rng))\n",
|
|
"\n",
|
|
" # define the optimiser, here stochasitc gradient descent\n",
|
|
" # with fixed learning rate and max_epochs\n",
|
|
" lr_scheduler = LearningRateFixed(learning_rate=learning_rate, max_epochs=max_epochs)\n",
|
|
" optimiser = SGDOptimiser(lr_scheduler=lr_scheduler)\n",
|
|
"\n",
|
|
" logger.info('Training started...')\n",
|
|
" tr_stats, valid_stats = optimiser.train(model, train_dp, valid_dp)\n",
|
|
"\n",
|
|
" logger.info('Testing the model on test set:')\n",
|
|
" tst_cost, tst_accuracy = optimiser.validate(model, test_dp)\n",
|
|
" logger.info('MNIST test set accuracy is %.2f %%, cost (%s) is %.3f'%(tst_accuracy*100., cost.get_name(), tst_cost))\n",
|
|
" \n",
|
|
" stats.append((tr_stats, valid_stats, (tst_cost, tst_accuracy)))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Exercise 1: Implement L1 based regularisation\n",
|
|
"\n",
|
|
"Implement L1 regularisation penalty (just for weight matrices, ignore biases). Test your solution on one hidden layer model similar to the one from coursework's Task 4 (800 hidden units) but limit training data to 10 000 (random) data-points (keep validation and test sets the same). First build and train not-regularised model as a basline. Then train regularised model starting with $\\beta_{L1}$ set to 0.001 and do some grid search for better values. Plot validation accuracies as a function of epochs for each model (each $\\beta_{L1}$ you tried).\n",
|
|
"\n",
|
|
"Implementation tips:\n",
|
|
"* Have a look at the constructor of mlp.optimiser.SGDOptimiser class, it has been modified to take more optimisation-related arguments.\n",
|
|
"* The best place to implement regularisation terms is `pgrads` method of mlp.layers.Layer (sub)-classes (look at equations (5) and (8) to see why). Some modifications are also required in `train_epoch`."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 4,
|
|
"metadata": {
|
|
"collapsed": false,
|
|
"scrolled": true
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"ERROR: Line magic function `%autoreload` not found.\n",
|
|
"INFO:root:Training started...\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Training cost (ce) for initial model is 8.934. Accuracy is 8.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Validation cost (ce) for initial model is 8.863. Accuracy is 9.84%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Training cost (ce) is 9.197. Accuracy is 58.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Validation cost (ce) is 6.979. Accuracy is 79.61%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Took 11 seconds. Training speed 219 pps. Validation speed 1518 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Training cost (ce) is 6.832. Accuracy is 84.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Validation cost (ce) is 6.781. Accuracy is 86.57%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Took 10 seconds. Training speed 268 pps. Validation speed 1637 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Training cost (ce) is 6.638. Accuracy is 89.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Validation cost (ce) is 6.763. Accuracy is 86.27%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Took 11 seconds. Training speed 220 pps. Validation speed 1520 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Training cost (ce) is 6.545. Accuracy is 92.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Validation cost (ce) is 6.839. Accuracy is 84.72%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Took 11 seconds. Training speed 230 pps. Validation speed 1495 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Training cost (ce) is 6.465. Accuracy is 94.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Validation cost (ce) is 6.831. Accuracy is 83.27%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Took 11 seconds. Training speed 216 pps. Validation speed 1616 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Training cost (ce) is 6.393. Accuracy is 96.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Validation cost (ce) is 6.674. Accuracy is 88.24%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Took 11 seconds. Training speed 212 pps. Validation speed 1502 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Training cost (ce) is 6.332. Accuracy is 98.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Validation cost (ce) is 6.616. Accuracy is 89.67%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Took 11 seconds. Training speed 220 pps. Validation speed 1493 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Training cost (ce) is 6.284. Accuracy is 99.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Validation cost (ce) is 6.635. Accuracy is 88.19%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Took 11 seconds. Training speed 225 pps. Validation speed 1624 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Training cost (ce) is 6.241. Accuracy is 99.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Validation cost (ce) is 6.599. Accuracy is 89.06%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Took 10 seconds. Training speed 263 pps. Validation speed 1536 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Training cost (ce) is 6.204. Accuracy is 99.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Validation cost (ce) is 6.548. Accuracy is 90.05%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Took 11 seconds. Training speed 232 pps. Validation speed 1541 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Training cost (ce) is 6.171. Accuracy is 99.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Validation cost (ce) is 6.534. Accuracy is 89.84%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Took 11 seconds. Training speed 223 pps. Validation speed 1515 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Training cost (ce) is 6.135. Accuracy is 99.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Validation cost (ce) is 6.511. Accuracy is 89.93%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Took 12 seconds. Training speed 207 pps. Validation speed 1488 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Training cost (ce) is 6.104. Accuracy is 99.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Validation cost (ce) is 6.497. Accuracy is 89.85%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Took 11 seconds. Training speed 210 pps. Validation speed 1488 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Training cost (ce) is 6.072. Accuracy is 99.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Validation cost (ce) is 6.512. Accuracy is 88.62%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Took 11 seconds. Training speed 237 pps. Validation speed 1497 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Training cost (ce) is 6.046. Accuracy is 99.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Validation cost (ce) is 6.443. Accuracy is 89.84%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Took 11 seconds. Training speed 227 pps. Validation speed 1490 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Training cost (ce) is 6.011. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Validation cost (ce) is 6.416. Accuracy is 89.92%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Took 10 seconds. Training speed 261 pps. Validation speed 1541 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Training cost (ce) is 5.982. Accuracy is 99.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Validation cost (ce) is 6.390. Accuracy is 89.72%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Took 11 seconds. Training speed 218 pps. Validation speed 1499 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Training cost (ce) is 5.952. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Validation cost (ce) is 6.358. Accuracy is 89.97%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Took 11 seconds. Training speed 208 pps. Validation speed 1518 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Training cost (ce) is 5.923. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Validation cost (ce) is 6.332. Accuracy is 90.23%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Took 11 seconds. Training speed 210 pps. Validation speed 1600 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Training cost (ce) is 5.894. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Validation cost (ce) is 6.311. Accuracy is 90.03%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Took 11 seconds. Training speed 209 pps. Validation speed 1506 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Training cost (ce) is 5.864. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Validation cost (ce) is 6.286. Accuracy is 89.97%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Took 11 seconds. Training speed 209 pps. Validation speed 1546 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Training cost (ce) is 5.835. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Validation cost (ce) is 6.255. Accuracy is 90.04%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Took 11 seconds. Training speed 211 pps. Validation speed 1495 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Training cost (ce) is 5.807. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Validation cost (ce) is 6.227. Accuracy is 90.08%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Took 10 seconds. Training speed 268 pps. Validation speed 1626 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Training cost (ce) is 5.778. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Validation cost (ce) is 6.203. Accuracy is 90.03%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Took 11 seconds. Training speed 210 pps. Validation speed 1502 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Training cost (ce) is 5.749. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Validation cost (ce) is 6.176. Accuracy is 90.06%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Took 11 seconds. Training speed 205 pps. Validation speed 1515 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Training cost (ce) is 5.720. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Validation cost (ce) is 6.149. Accuracy is 90.01%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Took 11 seconds. Training speed 215 pps. Validation speed 1495 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Training cost (ce) is 5.691. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Validation cost (ce) is 6.125. Accuracy is 90.09%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Took 12 seconds. Training speed 198 pps. Validation speed 1508 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Training cost (ce) is 5.663. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Validation cost (ce) is 6.097. Accuracy is 89.93%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Took 11 seconds. Training speed 207 pps. Validation speed 1518 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Training cost (ce) is 5.634. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Validation cost (ce) is 6.070. Accuracy is 90.09%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Took 11 seconds. Training speed 218 pps. Validation speed 1515 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Training cost (ce) is 5.606. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Validation cost (ce) is 6.043. Accuracy is 90.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Took 10 seconds. Training speed 268 pps. Validation speed 1543 pps.\n",
|
|
"INFO:root:Testing the model on test set:\n",
|
|
"INFO:root:MNIST test set accuracy is 89.25 %, cost (ce) is 0.451\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"%autoreload\n",
|
|
"\n",
|
|
"import numpy\n",
|
|
"import logging\n",
|
|
"\n",
|
|
"from mlp.layers import MLP, Linear, Sigmoid, Softmax #import required layer types\n",
|
|
"from mlp.optimisers import SGDOptimiser #import the optimiser\n",
|
|
"from mlp.dataset import MNISTDataProvider #import data provider\n",
|
|
"from mlp.costs import CECost #import the cost we want to use for optimisation\n",
|
|
"from mlp.schedulers import LearningRateFixed\n",
|
|
"\n",
|
|
"rng = numpy.random.RandomState([2015,10,10])\n",
|
|
"\n",
|
|
"#some hyper-parameters\n",
|
|
"nhid = 800\n",
|
|
"learning_rate = 0.5\n",
|
|
"max_epochs = 30\n",
|
|
"l1_weight = 0.0001\n",
|
|
"l2_weight = 0.0\n",
|
|
"cost = CECost()\n",
|
|
" \n",
|
|
"stats = []\n",
|
|
"layer = 1\n",
|
|
"for i in xrange(1, 2):\n",
|
|
"\n",
|
|
" train_dp.reset()\n",
|
|
" valid_dp.reset()\n",
|
|
" test_dp.reset()\n",
|
|
" \n",
|
|
" #define the model\n",
|
|
" model = MLP(cost=cost)\n",
|
|
" model.add_layer(Sigmoid(idim=784, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" for i in xrange(1, layer):\n",
|
|
" logger.info(\"Stacking hidden layer (%s)\" % str(i+1))\n",
|
|
" model.add_layer(Sigmoid(idim=nhid, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" model.add_layer(Softmax(idim=nhid, odim=10, rng=rng))\n",
|
|
"\n",
|
|
" # define the optimiser, here stochasitc gradient descent\n",
|
|
" # with fixed learning rate and max_epochs\n",
|
|
" lr_scheduler = LearningRateFixed(learning_rate=learning_rate, max_epochs=max_epochs)\n",
|
|
" optimiser = SGDOptimiser(lr_scheduler=lr_scheduler, \n",
|
|
" dp_scheduler=None,\n",
|
|
" l1_weight=l1_weight, \n",
|
|
" l2_weight=l2_weight)\n",
|
|
"\n",
|
|
" logger.info('Training started...')\n",
|
|
" tr_stats, valid_stats = optimiser.train(model, train_dp, valid_dp)\n",
|
|
"\n",
|
|
" logger.info('Testing the model on test set:')\n",
|
|
" tst_cost, tst_accuracy = optimiser.validate(model, test_dp)\n",
|
|
" logger.info('MNIST test set accuracy is %.2f %%, cost (%s) is %.3f'%(tst_accuracy*100., cost.get_name(), tst_cost))\n",
|
|
" \n",
|
|
" stats.append((tr_stats, valid_stats, (tst_cost, tst_accuracy)))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Exercise 2: Implement L2 based regularisation\n",
|
|
"\n",
|
|
"Implement L2 regularisation method. Follow similar steps as in Exercise 1. Start with $\\beta_{L2}$ set to 0.001 and do some grid search for better values. Plot validation accuracies as a function of epochs for each model."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 5,
|
|
"metadata": {
|
|
"collapsed": false,
|
|
"scrolled": true
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"ERROR: Line magic function `%autoreload` not found.\n",
|
|
"INFO:root:Training started...\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Training cost (ce) for initial model is 2.666. Accuracy is 8.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Validation cost (ce) for initial model is 2.595. Accuracy is 9.84%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Training cost (ce) is 2.884. Accuracy is 59.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Validation cost (ce) is 0.622. Accuracy is 81.68%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Took 11 seconds. Training speed 215 pps. Validation speed 1504 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Training cost (ce) is 0.543. Accuracy is 84.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Validation cost (ce) is 0.523. Accuracy is 85.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Took 12 seconds. Training speed 205 pps. Validation speed 1508 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Training cost (ce) is 0.401. Accuracy is 88.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Validation cost (ce) is 0.524. Accuracy is 85.17%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Took 12 seconds. Training speed 200 pps. Validation speed 1460 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Training cost (ce) is 0.271. Accuracy is 93.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Validation cost (ce) is 0.484. Accuracy is 86.72%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Took 11 seconds. Training speed 206 pps. Validation speed 1518 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Training cost (ce) is 0.220. Accuracy is 95.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Validation cost (ce) is 0.452. Accuracy is 88.28%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Took 10 seconds. Training speed 245 pps. Validation speed 1590 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Training cost (ce) is 0.169. Accuracy is 97.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Validation cost (ce) is 0.427. Accuracy is 89.22%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Took 10 seconds. Training speed 277 pps. Validation speed 1575 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Training cost (ce) is 0.137. Accuracy is 98.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Validation cost (ce) is 0.460. Accuracy is 88.52%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Took 10 seconds. Training speed 280 pps. Validation speed 1534 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Training cost (ce) is 0.112. Accuracy is 98.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Validation cost (ce) is 0.433. Accuracy is 89.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Took 10 seconds. Training speed 269 pps. Validation speed 1482 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Training cost (ce) is 0.099. Accuracy is 98.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Validation cost (ce) is 0.436. Accuracy is 89.63%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Took 12 seconds. Training speed 214 pps. Validation speed 1335 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Training cost (ce) is 0.083. Accuracy is 99.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Validation cost (ce) is 0.441. Accuracy is 89.92%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Took 12 seconds. Training speed 200 pps. Validation speed 1425 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Training cost (ce) is 0.076. Accuracy is 99.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Validation cost (ce) is 0.443. Accuracy is 89.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Took 11 seconds. Training speed 246 pps. Validation speed 1445 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Training cost (ce) is 0.068. Accuracy is 99.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Validation cost (ce) is 0.444. Accuracy is 90.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Took 11 seconds. Training speed 220 pps. Validation speed 1469 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Training cost (ce) is 0.066. Accuracy is 99.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Validation cost (ce) is 0.459. Accuracy is 89.96%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Took 12 seconds. Training speed 201 pps. Validation speed 1518 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Training cost (ce) is 0.062. Accuracy is 99.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Validation cost (ce) is 0.462. Accuracy is 89.66%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Took 10 seconds. Training speed 295 pps. Validation speed 1532 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Training cost (ce) is 0.059. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Validation cost (ce) is 0.469. Accuracy is 89.78%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Took 11 seconds. Training speed 252 pps. Validation speed 1383 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Training cost (ce) is 0.057. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Validation cost (ce) is 0.466. Accuracy is 90.07%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Took 12 seconds. Training speed 206 pps. Validation speed 1445 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Training cost (ce) is 0.056. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Validation cost (ce) is 0.466. Accuracy is 90.02%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Took 11 seconds. Training speed 238 pps. Validation speed 1458 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Training cost (ce) is 0.055. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Validation cost (ce) is 0.474. Accuracy is 90.06%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Took 11 seconds. Training speed 236 pps. Validation speed 1399 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Training cost (ce) is 0.054. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Validation cost (ce) is 0.473. Accuracy is 90.06%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Took 11 seconds. Training speed 221 pps. Validation speed 1441 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Training cost (ce) is 0.054. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Validation cost (ce) is 0.480. Accuracy is 89.99%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Took 12 seconds. Training speed 203 pps. Validation speed 1401 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Training cost (ce) is 0.053. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Validation cost (ce) is 0.481. Accuracy is 90.08%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Took 10 seconds. Training speed 295 pps. Validation speed 1471 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Training cost (ce) is 0.052. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Validation cost (ce) is 0.487. Accuracy is 89.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Took 11 seconds. Training speed 238 pps. Validation speed 1466 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Training cost (ce) is 0.052. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Validation cost (ce) is 0.485. Accuracy is 90.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Took 12 seconds. Training speed 205 pps. Validation speed 1314 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Training cost (ce) is 0.052. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Validation cost (ce) is 0.488. Accuracy is 90.04%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Took 12 seconds. Training speed 203 pps. Validation speed 1493 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Training cost (ce) is 0.051. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Validation cost (ce) is 0.490. Accuracy is 90.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Took 11 seconds. Training speed 220 pps. Validation speed 1443 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Training cost (ce) is 0.051. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Validation cost (ce) is 0.492. Accuracy is 90.07%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Took 12 seconds. Training speed 211 pps. Validation speed 1387 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Training cost (ce) is 0.051. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Validation cost (ce) is 0.494. Accuracy is 89.98%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Took 12 seconds. Training speed 217 pps. Validation speed 1433 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Training cost (ce) is 0.050. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Validation cost (ce) is 0.494. Accuracy is 90.04%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Took 10 seconds. Training speed 296 pps. Validation speed 1475 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Training cost (ce) is 0.050. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Validation cost (ce) is 0.495. Accuracy is 90.19%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Took 12 seconds. Training speed 218 pps. Validation speed 1361 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Training cost (ce) is 0.050. Accuracy is 100.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Validation cost (ce) is 0.501. Accuracy is 89.98%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Took 12 seconds. Training speed 222 pps. Validation speed 1321 pps.\n",
|
|
"INFO:root:Testing the model on test set:\n",
|
|
"INFO:root:MNIST test set accuracy is 89.24 %, cost (ce) is 0.460\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"%autoreload\n",
|
|
"\n",
|
|
"import numpy\n",
|
|
"import logging\n",
|
|
"\n",
|
|
"from mlp.layers import MLP, Linear, Sigmoid, Softmax #import required layer types\n",
|
|
"from mlp.optimisers import SGDOptimiser #import the optimiser\n",
|
|
"from mlp.dataset import MNISTDataProvider #import data provider\n",
|
|
"from mlp.costs import CECost #import the cost we want to use for optimisation\n",
|
|
"from mlp.schedulers import LearningRateFixed\n",
|
|
"\n",
|
|
"logger = logging.getLogger()\n",
|
|
"logger.setLevel(logging.INFO)\n",
|
|
"rng = numpy.random.RandomState([2015,10,10])\n",
|
|
"\n",
|
|
"#some hyper-parameters\n",
|
|
"nhid = 800\n",
|
|
"learning_rate = 0.5\n",
|
|
"max_epochs = 30\n",
|
|
"l1_weight = 0\n",
|
|
"l2_weight = 0.00001\n",
|
|
"cost = CECost()\n",
|
|
" \n",
|
|
"stats = []\n",
|
|
"layer = 1\n",
|
|
"for i in xrange(1, 2):\n",
|
|
"\n",
|
|
" train_dp.reset()\n",
|
|
" valid_dp.reset()\n",
|
|
" test_dp.reset()\n",
|
|
" \n",
|
|
" #define the model\n",
|
|
" model = MLP(cost=cost)\n",
|
|
" model.add_layer(Sigmoid(idim=784, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" for i in xrange(1, layer):\n",
|
|
" logger.info(\"Stacking hidden layer (%s)\" % str(i+1))\n",
|
|
" model.add_layer(Sigmoid(idim=nhid, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" model.add_layer(Softmax(idim=nhid, odim=10, rng=rng))\n",
|
|
"\n",
|
|
" # define the optimiser, here stochasitc gradient descent\n",
|
|
" # with fixed learning rate and max_epochs\n",
|
|
" lr_scheduler = LearningRateFixed(learning_rate=learning_rate, max_epochs=max_epochs)\n",
|
|
" optimiser = SGDOptimiser(lr_scheduler=lr_scheduler, \n",
|
|
" dp_scheduler=None,\n",
|
|
" l1_weight=l1_weight, \n",
|
|
" l2_weight=l2_weight)\n",
|
|
"\n",
|
|
" logger.info('Training started...')\n",
|
|
" tr_stats, valid_stats = optimiser.train(model, train_dp, valid_dp)\n",
|
|
"\n",
|
|
" logger.info('Testing the model on test set:')\n",
|
|
" tst_cost, tst_accuracy = optimiser.validate(model, test_dp)\n",
|
|
" logger.info('MNIST test set accuracy is %.2f %%, cost (%s) is %.3f'%(tst_accuracy*100., cost.get_name(), tst_cost))\n",
|
|
" \n",
|
|
" stats.append((tr_stats, valid_stats, (tst_cost, tst_accuracy)))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Exercise 3:\n",
|
|
" \n",
|
|
"Droput applied to input features (turning on/off some random pixels) may be also viewed as a form of data augmentation -- as we effectively create images that differ in some way from training one but also model is tasked to properly classify imperfect data-points.\n",
|
|
"\n",
|
|
"Your task in this exercise is to pick a random digit from MNIST dataset (use MNISTDataProvider) and corrupt it pixel-wise with different levels of probabilities $p_{d} \\in \\{0.9, 0.7, 0.5, 0.2, 0.1\\}$ (reminder, dropout probability is $1-p_d$) that is, for each pixel $x_{i,j}$ in image $\\mathbf{X} \\in \\mathbb{R}^{W\\times H}$:\n",
|
|
"\n",
|
|
"$\\begin{align}\n",
|
|
"d_{i,j} & \\sim\\ \\mbox{Bernoulli}(p_{d}) \\\\\n",
|
|
"x_{i,j} &=\n",
|
|
"\\begin{cases}\n",
|
|
" 0 & \\quad \\text{if } d_{i,j} = 0\\\\\n",
|
|
" x_{i,j} & \\quad \\text{if } d_{i,j} = 1\\\\\n",
|
|
"\\end{cases}\n",
|
|
"\\end{align}\n",
|
|
"$\n",
|
|
"\n",
|
|
"Plot the solution as a 2x3 grid of images for each $p_d$ scenario, at position (0, 0) plot an original (uncorrupted) image.\n",
|
|
"\n",
|
|
"Tip: You may use numpy.random.binomial function to draw samples from Bernoulli distribution."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 17,
|
|
"metadata": {
|
|
"collapsed": false
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Using matplotlib backend: TkAgg\n",
|
|
"Populating the interactive namespace from numpy and matplotlib\n"
|
|
]
|
|
},
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"<matplotlib.image.AxesImage at 0x7f0472955e10>"
|
|
]
|
|
},
|
|
"execution_count": 17,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
},
|
|
{
|
|
"data": {
|
|
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAWwAAAD7CAYAAABOi672AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvVlsXVt63/lbZ54nTudwkkhRw9WV7q0BdW9VKh2X3V1A\nxX4M4CCNvMQxAiQx2kY/JHA/xHY/Bmi3O4Ef/GAHSMqdFNoBjAAu23GcOCk7VXWdKlXpStekJE46\nhzw88zwPux/EtbQPNVEcxHM21w/YIEWRR0v8n/Xfa3/rW98nDMNAo9FoNOOP7aIHoNFoNJrjoQ1b\no9FoJgRt2BqNRjMhaMPWaDSaCUEbtkaj0UwI2rA1Go1mQjixYQshviGE+FQI8ZkQ4p+e5aA0F4vW\n1ppoXScfcZI8bCGEG1gH/jqQAb4L/APDMO6d7fA07xqtrTXRulqDk66wPwYeGoaxZxhGH/gW8DNn\nNyzNBaK1tSZaVwvgOOHPLQJJ059TwNfM3yCE0EcoxwjDMMQxv1VrO0FoXa3Jq3Q96QpbC2tdtLbW\nROtqAU5q2ClgyfTnJUbv3prJRWtrTbSuFuCkhv2XwB0hxIIQwgn8LPCHZzcszQWitbUmWlcLcKIY\ntmEYbSHEPwT+mGem/28Mw/jhmY5McyFoba2J1tUanCit71gvrDcwxoq32Jx6I1rb8UHrak3OetNR\no9FoNO8YbdgajUYzIWjD1mg0mglBG7ZGo9FMCNqwNRqNZkLQhq3RaDQTgjZsjUajmRC0YWs0Gs2E\noA1bo9FoJoSTllcFQAixA1SBAdAzDOOjsxiU5mLRuloTrevkcyrD5lnJxq8ZhlE8i8FoxgatqzXR\nuk44ZxESObNaBpqxQutqTbSuE8xpDdsA/kQIcV8I8QtnMSDNWKB1tSZa1wnntCGRLxuGkRVCzAB/\nJIRYNwzjP53FwDQXitbVmmhdJ5xTrbANw8gefswBvwd86SwGpblYtK7WROs6+ZzYsIUQPiGE7/Bz\nP/AN4OFZDUxzMWhdrYnW1RqcJiQyB/z+YdFzH/DvDMP4D2czrPHDbrfjcDhwOBwjnzscDmw2Gzab\nDSGE+ijEs70dIQTmJhHdbpdOp0On06Hb7TIYDBgOhwwGA86rmcRbcql0PYrT6cTj8eDxeHC5XEpb\nqSug9DXr1e/3la6dTofBYKAureu7RwgxMkePzt/X6QoozbrdLs1mk2azSavVurD/j0R3nDkmXq8X\nv99PIBAgEAjg9/vV5fF4cDqdOJ3Ol05ywzDUVSwWyeVy5HI5SqUSrVaLdrtNq9ViOBye2/h1Z5Lj\nEYlEiMfjzM3NMT09PaLr0YkuNQWo1WpK11wuN6Jrv98/t/FqXV+O3W5X89Q8ZwOBAD6fT2nqdDqV\nrna7fcSw5XxNJpM8ffqUZPLd9Sx+la6n3XS8NDidTkKhEFNTUyNXLBYjGAzi9XrxeDx4vd6Ru7k0\n7OFwiGEY7O7usrW1xebmJkIIqtUqhmHQ6XTO1bA1x8Pv9zM/P8+NGzdYWVnB6/Wqy+l0qhWa3W5X\nk3o4HJLP59nc3OTJkycYhkG1WgWerdDO07A1L8dms+H1eonFYkxPT4/M2Wg0qjSVT1Jyzkpdh8Mh\nw+GQVCrFvXv3aDQa79SwX4U27GPicrkIBoPMzMywuLhIIpFgYWGB+fl5YrGYuosHAgG1KnM6ndhs\nNiX+cDjk4cOH+Hw+er0ejUZDmbW8s2suFmnYt2/f5sMPP1SaBoNB3G73iLZmXZPJJMFgkOFwOGLW\njUbjgv9HlxObzYbP5yMWi6m5KudrIpF4o64yVLm+vk6z2WR3d/ei/0vAJTRsuTqSd1S32z1yuVwu\n9VE+JtntdsLhMNPT08zMzLxwx5YrbK/Xi9vtHlmFybiYNO5gMMjs7CxXr16l1+vhcrnodruUSiV6\nvd5F/3omFiHEiLZSx5dd8nukJvl8nkKhQKFQwOFw4PV6CYVCRKNRfD6fWonJFbbct5D/rjSHqakp\nlpaWaDabeDweBoMB1WqVZrN5wb8d6+DxePD5fPh8PhwOh4ovN5tNgsEgU1NTTE9PEw6HgWf6FItF\nfD4fs7OzeDyekRW2nOvm2LZEzn9znPuiuZSGLWNXPp+PUCj00svv96vvczqd+P1+gsGguiubY2Jy\nMsvvN8fDzELbbDb8fj+zs7N0Oh0Mw6DX61EqlbDb7Rf4W5l85CaTy+VST0Mv0zUcDqsVlbxZbmxs\nsL6+TrVaxW63K8OOxWIjukqjNm9SSTweD1NTUywvLyOEUCvtdDp9Ub8SS+L1epUpezwe8vk8+Xye\ndrtNJBJhbW2NW7duMTc3x97eHnt7e6RSKbxeL91uF7fbTTQaVZqa9ybMG8lS26NJBBfNpTVsj8dD\nKBRiZmbmpVc0GlXZAjLOdXTyvm7X+WimiPzo9/uZmZlRq7xyuUwqldKGfUqkYbvdbrxeL5FIhNnZ\n2Rd0nZ2dVRvFHo+HVquFy+WiWq2ytbWFw+HA4/EQDAaJRCIjuppX1Uc/ejweYrEYAD6fj2q1yv7+\nPk6n82J+IRZF3hiXlpbw+/3YbDba7TbFYpFwOMza2hoff/wxV65c4ZNPPlFPTj6fb8SwXzZfJUez\nRsbFrMGihm1Or3M4HMpoXS4XPp9PZXfIST07O8vc3Jya4LOzs8RiMbWJ6PF4RnaQ5QpKbiQOBgP6\n/b6KfZkfzY+GRVwuF36/H8Mw1GOc2+0eeRTTvBzzBBJCjNxEj944zSGKxcXFEZ39fr/StdVqkc1m\nSafTJJNJ4vG4emSWIZN+v89gMHgh7GKe7A6HA5/Pp74vHA6rDWjN63mTruaU2YWFBa5evcrVq1fx\ner10Oh1KpZLSXaZlynkeDAbVHBsMBiqbR/qCOXz5Ml3l03UsFmNubm4kLXcwGLzz35XlDPto/qWM\na8lLmrUMbUQiEcLhsPooH5t9Pt/IGwae52YOBgN6vR7dblcJ2G63abfbdDodFWMzxz6dTudIxki/\n31dGYE4P07was7ZOp5NYLKZ0dTqdlEoldQGEQiEWFhZYXV0lHA4TDodVCEs+CrtcLhKJBHfu3EEI\nQSgUYnFxEbvdTqlUUrq2Wi1lylJb80aV1FW+N+QNXOv6Zl6naygUGnmynZqaUjdfmXaXSqWw2WyU\ny2WePHmCy+UilUpRrVaJxWJ89atfVXtSGxsbpNNpNddDoZBK9ZOXNHKn04nX6yWRSHDr1i3a7TaZ\nTIZsNksmk7mQDWXLGTaMhj1isRhXrlxRd2VzNod5pSU/mkMg5jsuMLKi7nQ6arOj0WhQrVapVqvU\n63UikQjRaJRoNMpwOMTj8ag3pXlF3uv11G605s3I36HH48HtdqvN25WVFVwuF0+ePKHb7ZJOp5X5\nSsOWP+PxeEZWVdKwhRBMT09js9kIBoM4HA6KxSLVapVarUa1WsXlcildI5EIHo8HYCQVzHwj1roe\nj5fpurKywtWrV4nH4yOplea86kajQSq
VIhgMYrPZqFQqPHnyhGq1SjweJ5FIMD8/z507d2g0GqTT\naTY2NiiVSiQSCfU9MtUvGo1iGIbyALmfEY/HuXXrFi6Xi0ePHiGEoFKpjKdhCyF+B/gZIGsYxt3D\nr8WAb/Hs9FQa+NuGYZTPc6DH5WgsMxaLcfXqVe7evcvdu3fVyjoYDOLxeF4be5avB6jJJ1dR7XZb\nGXWpVFKxsmKxSDwep9vtvhCakT//son9rldik6YroB553W43Pp+Pubk5rl+/zt27d3G73fR6PdLp\nNO12GxhdYQMv1dZut5NIJJiZmeG9996j2WxSr9ep1+sUi0Wlaz6fx+v10m63MQxD6Wm323G73WOz\nwraCrvF4nLW1NT744ANWVlbUfA0EAiMni3O5HI8fPyYQCKj9ILkXEYvF+Kmf+inu3LnDX/trf429\nvT0ODg7Y2NjgRz/6Eaurq1y7do3V1VUWFxeVrnLPQd7M5Qrb5XIxMzODzWajWq3y9OnTC/ldHWeF\n/a+Afwn8a9PXfg34A8MwfkMI8UuHf/7FcxjfsZAC2u12nE6nikfPzc2xvLzM2toai4uLxGIx9Tgr\nH2nNE1geJZZmKo+Py5iVfDxut9tqUsvLvMIWQqgY6tENi263S61Wo1AokM1mKZfL6s3yjhl7XWE0\ntSoQCDA3N6f0lRMukUjgdDpZW1uj0+ngcrm4ceMGy8vLBIPBkaeawWAwoulRbRuNxgu6yhV2JBLB\n7XYTiURe2HgcDAY0Gg2KxSIHBweUSiWazeZFxDknQlczclWdSCTUitcwDJLJJJ1OR4WoYrEYjUZD\nhb2ePn3KxsYGuVyOfr+vdB4MBjSbTTKZDJubm0xNTZHNZnn69KkKc1UqFbLZrMrqkpuRRxkMBrRa\nLWq1GsVikVqtRrvdvrCnpzcatmEY3xFCXD3y5Z8GZHuhbwLf44INW8YSfT4fiUSCtbU1ZdTyqHEo\nFFKbj3Iz0Hxs3ByPbrVayoTlVavV1NVqtVRIxDzxu92uygSRG1DmlZ007Fwux8HBAeVymWaz+c7f\nAJOgK6Buwk6nk0gkwtLSEtevX2dtbY1EIqHM2+FwsLa2hs/nY35+nrm5Oa5evYrf71c3X7nHIA34\nqLYyZ1rWjWg2myPmLjck5UEn881YHoQqFArs7+9TLBap1+vv/JTjpOhqxu12Mzc3x61bt7h9+zbN\nZpNKpcLjx4+VaUtdy+Uyjx8/5tGjRzx+/Jjt7W3S6fQLZxj6/T7ZbJb19XX6/T6VSoXt7W0qlQrD\n4ZBms0mhUMAwDBXq6nQ6L+RcH9W1UCjQaDQu7PTqSWPYM4ZhFAAMw8gLIWbPcExvjfmRKhAIkEgk\nuHnzJl/4whdYWFgYqfthXrEdjn8kLt1oNFSoI5fLkc1myWaz5PN5isUi+XyeUqn0QqEfM9PT0ywt\nLdHv9194A3Q6HarVKtlsVq3EzruOyFswVrrC6H5EJBJheXmZ999/ny984QuEw2Glq91ux+/3s7Cw\nwPvvv4/b7cbv96tTpa1WS2krn26y2Sy5XE6FPORkNN98zQyHQ+bn5+l2uy8UDur3+9Tr9bGZ2EcY\nO13NeDwe4vE47733Hl/5ylfY2Nggn8/z6NEjut0uXq+XhYUFDMNQhv3f//t/58GDB2oBdfT33Ov1\nyGQydLtdMpkMnU6HYrFIpVJRGVryo9frZX5+fuRGfFTXfD7P3t4ehULhQm7EEktsOspfsFyNhcNh\n4vE4KysrzM/Pj5xslDFjWeNBZnrIla+8SqUSmUyGg4MDDg4OVFEfady9Xk9dMl1PXu12m8Fg8MLN\nYTgcqscxucKWj85jYthjh9wHsNvt6pTawsICa2tramNI1mzxer1Eo9GRLB65WjavouXv/uDggEwm\no3TN5XI0m02l62AwGNFV/h2gHqXlU5p8/5gn9hgZ9lgiTVE+Pc3Pz7O2tkalUsHv9zMcDqnX65TL\nZaVZMplka2uLx48f8/jxY7VPcHT+DAYDKpUKzWaTbDbLcDhUusoDazI9t9FoKF0dDge9Xo9arUa9\nXieXy7G/v6+ui9b1pIadE0JMH96tZ4DsWQ7qbTFv9nQ6nRc2fWR82DAMFe6Q2R2VSuWlj8bVapVy\nuUypVFKbGdVqVYlrfn1pJnIHOxKJqHQhr9erNqjk45W8GaTT6XFbYY+VroDaoJWhCWmk5lCWfEKS\nukqDPqqt1LdcLr+grYxNmlMtZU6w1DUajb5U136/T6vVolwuk8lkSCaTlEqll678Loix09W8kjU/\nqRiGQSwW48aNG6psQDgcJp1O86d/+qc8ffqUvb09BoMBgUBAmXC3231hH0j6gnzd4+jq8XjUk5d8\nCpZmnU6nLyzUJTmpYX8b+LvAbxx+/PaZjegEGIZBv99HCEG3233lpJaGXalUKJfLFAqFkbzKSqWi\nDLter4+UyJTxz3a7TbfbHckAsNlsI6fjwuEwwWBQnaiT/3av16PZbI4YtoyZjolhj5Wu8NywpbYv\ny12Xk7HRaCgjzuVyZDIZMpkM+Xz+hTh1q9VSukptO53OyI3YPLFlqqbM5fb5fCP/drPZpFwuk81m\nSaVS6uYxJoY9drqazfpo2FAadiwWGzHNe/fuUS6XqVQqDAYDgsEgrVZLzS2zYZvTLM2r8Dfp6vF4\nqFQqbGxs8ODBA/b29tT7plarqYXe2Bq2EOLfAj8BTAshksA/A34F+JYQ4ueAA+Bnz3WUb8B8Jz3a\nGMA8qWVlPLlDvL+/z+7uLk+fPmV3d3dkJS1XvWahzZd8TUCtsIPBILFYbGSF7fF41Cqg3+8rU5Fv\nRLmr/a4NexJ0heeGLcMOr7oZ9/t9ms0mxWKRTCajNN3d3WV/f3/kyeno05e5TKpEpu6ZJ7ZZVxkb\nl1er1VLvq1QqNfK+eZdMiq7ASEqtObwUi8WIRqPcuHGDdDrNn/3Zn3Hv3j3+63/9rwyHQ5XpFQgE\nlFnLVE4z5nMTkjfp6vF4KJfLPHr0iO985zs8ffp0ZM6bG1NcBMfJEvk7r/irr5/xWE6MebL1ej0V\ne0omky9sHMlVl7zS6TTpdJpsNjtyB+10Oq/9N+XqAFAbYolEgqWlJeLxOOFwGJfLpWJkMjYu49aN\nRkM9xl3EScdJ0BVGtZU320wmw87OzkidjlarNaKrXJUdHByQz+eVro1G47U3R7nSs9ls6qTs7Ows\ny8vLLC4uMjU1hc/nQwhBu91WK690Oq02pC6y6uKk6ArPF1Htdpt0Os36+ro6bSir6Mk0yUKhoMrW\nyp+T8ebXdfV52dfNNX2O6ir3o+QNQdYgkdfRMOu7xhKbjjD6CFSpVEin02xublIsFtVjdL/fV7Ep\nmR0gwyOVSkU9Gh9ntWt+pJObYfPz81y7do1EIqEMu9/vU6vVlJHs7++rXNCLMutJwmzYsshPKpVS\nJ85kznyz2RzRVsanK5UK9Xpdxb+P87s2H3YKhULqROXS0hJTU1N4vV4Mw6DRaJDP58lkMqRSKQqF
\nwli0kZoEzLrK5gDy8Iu5o5OsRV0qlZRZyrljt9vfSleJzOl/ma7mMxSLi4v0ej215zEOc9YShm0O\nUfT7farVKgcHByqty5yDa659XK1WVQhFxkfN4ZXXYc5ekCcqZfbCzMwMkUgEp9NJr9ejWq2SyWTY\n3t5WObrm5Htt2K9HTm5Z6CeVSuHxeOj3+yO51eb0vFar9cJG5XF1NWcchUIhlXF01LCbzSb5fF61\nj9KG/XZIXaVhl8tlNjc3Vd2XSCSimkOUSqWRMKfc1zhJeEIIMWLYy8vLTE9Pv2DYS0tLKoWz3W6r\nGjUXiSUMG56bnnlFKyed3GCSG0PypNRJJ5d5QjudTgKBALFYjHg8ztLSkloh2O12tckpH+PT6TTl\nclnVw9a8HvPNuNvtUi6X2d/fRwhBp9NRutbrdaVruVw+cYzRfAjL3EZK1rmQKYSDwUDl56ZSKVKp\nFMViURv2MTmaudVut8nlcgAqrhyNRnE4HJRKJdVKT5Z2OE3YydyNZmFhgenpaZWOKxMD4HkfV3nQ\nbhzmq2UMW2IYBq1Wi1KppB5rzUfMG42GypM+KUI8q38si9DIhq0yPUgKLLMHpGEnk0my2SzVavWF\n2LrmzZgPp8jNJvPpVPO+wElxOBxK11AoxGAwUE0I8vk8KysrDIdDnE4n1WpV5V3LFM2XbX5p3g5p\nmvIJ9ix0PYosNhUIBBgOh+rJt1gsqn2tdDpNJpM51eLurLG0YbfbbWw220gdCWnep8nKkIYdDoeZ\nmppShi13m+X3yNzgUqlENpslmUyqPE5t2G+PTN2T4Qhz7RdzudvTTGz5xDQ1NUUkEmEwGChDzufz\nqs1bLBZTJQZSqRTpdFqVLNCcDmnYsrTDWehqRta5PmrYn332GRsbGyN5+7KmzLjoajnDlnUCpFmb\n64XIvz/tpoHZsGdmZl5YYcv0M/nGMx+oMB+80bwdcoUtV19HUzbPIpXOHOKSecDyKhaLBINBFhcX\nWV5eVitseahC3jg0p0POD/m0ctYpkvLJW2aCyDDbD3/4Q/7iL/5iJHXP3JB3HLCcYcPzEqZniTlf\nVGaFJBIJVldXuXLlCjMzMwQCAWw2mzraKjc/ZTaKLBR1HuO7DMic2rPMgTUf3rDZbCorZHl5mUQi\noWKpsqi+1+ul2Wyyv7+vslHq9brKINC6np6X5cW/LUd1BUaM15w0IPejZAkCuVEtz3OME5Y07PPA\nvBkVCASYmZlRleOuXLmiegXKx3VZYCiZTJLP52k0GhdW+1rzemR3EafTOXIjXllZUTUlarWaSvPr\ndrvs7OyQyWTUfsRF5+dqXsSsq9yoNIcipV4ul0vVH7py5YpK47vIAzKvQhv2MZGGLR+jpqenWVxc\n5Pr16ywsLKgke7Nhp1IpkskkuVyOer0+cofXE3s8kBk/shuNPAC1srLC7du3VUqgDMdUKhUqlYoq\nHlWpVEb2RLSu48FRXaXxvsyAzembV69eZX9/X2UBvekA3bvmjZ1fhRC/I4TICCE+NX3tV4UQKSHE\nvcPrG+c7zIvHvKscjUaZnZ1lYWGBK1eusLCwQDQaVY0+5YEKadj5fF4Z9riswrSuz5B1JXw+H5FI\nhOnpaZWeubKywq1bt7h79y5f/OIXuXHjBqFQiEaj8cIKW+s6XhzVVYa2ZGKA0+lUKcDmNL5YLDbS\njWrcOGnHGQP4dcMwfv1cRjWGuN1ulRGysLDA/Py8MmmZEdLr9RgOh6q4UzKZVCfgZP3dMULryrM6\nMMFgkHg8rh6J5X6E3W5Xj9LD4VCVZt3b22NnZ4dcLjdOFfkkWlee6RoKhZibmyMej6u+qnLDcXZ2\nlnK5zHe/+111NF5uHo+prsDJO84AiJd8zbLIhr5LS0tcu3ZNrarlndhcAtRs2MlkUmU2jJNha12f\nISd2PB7n2rVryrDlwSdzbW1Zx3xvb4/d3V0V3x6nia11fYb5Rnzt2jXVHlAehJGnnTc3N0eKvsna\nMOOmq+Q0Mex/LIT4eeAHwP9mGEbxjMY0lhw17JetsGUn9aMr7IuqyHdCLpWuskt6PB5/IeNHnm6T\n1fjMB2V2dnYuvHLbW3JpdZULrOnpaaanp3G73fzFX/wFm5ubfO9731NVM+VZjXHW9aRBmt8ErgG3\ngU3gX5zZiMYEmVzv8/kIhULEYjFmZmZIJBJKfNnFWcatC4UCe3t7ZLNZVcBe1rQYR/FfguV1hefl\ncGWzCXOoa3Z2lnA4jNvtBp4VnCqXy2qT8WiKpiz9OuZcSl2np6eVrtFoFJvNpipmFgoFarWaOkhn\nbmwyzrqeaIVtGEZefi6E+C3gv5zZiMYEm82G1+tV5R5lF3Z5ya4jsl5IuVwmnU6rU28ye2CSuAy6\nwrOsAFkYTE5qeU1NTREMBlVpXFmqV9YLKRQKE3f8/LLqOjs7qxpwG4ahmpXIWvj1ep1IJIJhGCp1\ncxzDIGZOZNhCiFnDMGSbob8FPDy7IY0HNpsNt9tNKBRSWSFyUs/OzqpEe7vdruKb6XSara0tle41\naafeLoOu8HxiRyKREV3j8TixWAyXy4XT6VQlVHO5HLu7uxNb4Oky6yo/5vN5stks9+7dY319XR2a\niUQiOBwObDYb3W6XRqNx0f+N13KSjjO/AvykEOIDwAXsAn//XEf5DpEF7B0OB36/n2g0SjweJ5FI\nMDc3x8zMDFNTUyPHoWXpxXQ6zfb29kQY9mXVVQihbsQzMzPMz8+riT09PU0wGASe16Sp1Wpks1me\nPn06ETWvL5uuZlwuF8FgUIUuZX0fv9+vmi1vbGzwwx/+cMTM7XY7zWZT9egcZ07aceZ3zmEsF47s\nwG3eYV5ZWWFlZYXV1VXi8bg6zSgzQrrdrrp7y9SgYrE49h2zL5OuMpVL6iuL06+urrK6usry8rIq\n5WkuEFatVlW/T3kUvVarjXWo6zLpehSfz8fc3Bxra2tcuXIFp9OpamnLbkDxeJzPfe5z6gYu5+q4\n6yoZ/1vKO8Rut6t6AubC9e+//z6Li4sq3QtQj0+yYpu5WahsrDvOhn2ZkIbtdrtVPv3S0hI3btzg\nxo0bzM7OKsOWHdBlGVepq8zPlZtSmvHDbNgrKyuqv2Y2m6Xf72O329XK2twZXTZmngRdtWGbkIbt\n8/lUbYFr165x584dZmZmVDF747CZryyaLzt0S8OesDQ+y2M2bHM3kZs3b3L79m3VqMDhcKibbblc\nVk9OZsPWuo4vPp+PeDzO9evXWV1dpVQqkUwm+d73voff7+fWrVvcuHGDRCLBgwcPKJVKqgb2pOiq\nDduEjG3KThSJRILZ2VlmZmYIh8OqXKtsBiuzQra2tlQoRB5z1YwPdrtd7UdMTU0xPz9PPB5ndnaW\nqakpVdFNpmfKwzGy9ZcMhUxadojVkWl88oa7uLio5mw0GlWxadkuUJYdCIfD6sYsSzFPCtqwTXi9\nXlWF7+rVq+pwjMvlUodj5CZjNptla2uL9fV1dnZ2SKV
SqquzZryQpxnn5+dZWlp64TSjbN4sywok\nk0k2NjZ4/Pix6tU4CfHNy4bcj5BZPu+99x4LCwuEQiHsdrtaYAE0m03S6TQOh0P14czn82OdGPAy\ntGGbkPmbV65cYW1tjfn5eSKRiDJsObGbzaYy7E8//ZRkMkmlUtGGPabIsqiJREJtSEnDttlsSlfZ\nITuVSrG+vs6nn36qqvNN2sS+DDidTqamplhZWeH69evqRGMwGFR1rmX9ELnSbrVapFIpdRx90m7E\nl96wzele0rCXl5dZXV1VoRCXy6WK57fbbdXkd3t7m4cPH5JOp3Wd6zFEaut0OgmHwyQSCa5du8bi\n4iLT09NqhS1bULVaLVUW99GjRzx48EA1m9Dajg9mXaPRKIuLi7z33nssLi4yOztLIBDAMIwXVtiy\n0a9cfE2irpfasGUMTGYPyFNRMrYZCASUWdfr9ZFTUltbW+TzedVBZlzKa2qe4XK5lLYzMzMqZj07\nO0skEsHr9WKz2eh0OmpzMZvNsr6+zt7eHrVabWRCa23HA7OuoVCIfr/PwcEB9+/fVwWbfD4fXq/3\nBUOWXWxk20D5tUni0hu21+slFAoRDoeVWcvDMW63W514q9VqpNNpNjc32dzcZGtri1wuR6fT0WY9\nhrhcLgK7UlrfAAAgAElEQVSBgMr2kWYtn5pklcVut0uhUGBnZ0dpu7+/rwwbJm9SWxmzrn6/n8Fg\nwMHBAcVikW63q1L73G73yELqqHGbP04Sl96wfT4f0WhUrcLMK2wZ/5Ir7P39fTY2Nnjw4AH5fF4b\n9hgjT71NTU2pWtfypKrf71fayjDIzs4O9+/fH8kKmeSJbVXMuvp8PqrVqqrdA6g87Gg0+tIVtplJ\n1PW11fqEEEtCiP8mhPhUCLEhhPgnh1+PCSH+RAhxXwjxx0KIyLsZ7tkij5/HYjGVDiQ7TpirtVUq\nFfL5PJlMhr29vZG2X+Na1et1WF1XQB1+kjfi6elptSozV1gsFosqzPX06VMODg4ol8uqqa7Wdrww\nr5hlAkC5XFaHYPL5PIVCgXK5TKPRUN2AXvbzk8ibyqt2gX9kGMZd4IvAzwshPgR+DfgDwzA+AP7w\n8M8Th8zPjcVizM/Pq1oSMm7dbDYpFoukUin29vbIZDIUi8WRsqmTkGz/EiytK7xo2NFoFJ/Pp7ra\nVyoVMpkMqVSKg4MD8vk8lUpF9fGbkHK4L8PS2vZ6PWq1mtp3qFQqtNtthsOhOh8hF1aFQkE1v7YK\nrw2JGIaRATKHn9eFEPeBBeCngY8Ov+2bwPeAXzzHcZ4LRw17ZmZmpLSmrHEtTzBms1kKhQLVapV+\nvz+Rq2uwvq7w/BDU7OwsiURCGbbdblfNCGSTiXQ6TT6fp1wuq6emSS0rYHVtu90u9XqdbreL3W6n\n0+mosKSsmnlwcIAQgnw+T6PRmNRF1Us5dgz7sO3Ql4CfA2YMwyjAs1q7QojZcxndOSNDIvL0m3mF\nPRwOR7qfm1fYVsq3tqKu8HyFfdSw5UajnNjJZFKtsOVjtFWworbdbveVOfFS10wmw2AwuHwrbIkQ\nIgD8HvCLhmFUZW7jJCJT+GQanzkrJBgMqm7KnU5HdZDZ2tpStZAn6Rjrm7CSrna7/aXaTk9PE41G\n8Xq9I2UFMpkMT58+ZXt7m0wmQ61Ws9TEtoq2R3UNBAIEg0GCwaBK3ZN1QDweD36/n0qlos5KWE3X\n49TDdgL/HvhdwzB+//DLOSHE9OGdegbIvvoVxgv5qCyr8cnMgenpaXw+H06nc2RDan9/n+3tbfb2\n9ixl2FbTVXYIktqaUzRlJT6zYcsa11tbW6pdlFUena2k7VFd4/E4CwsL6hSyDF/1+32q1SqlUkkV\ndbKarvAGwxbPbsu/DXxmGMb/bfqrbwN/F/iNw4/fPrcRnjHm4vXmFfb09DR2u1014pTx6/39fTY3\nN8nlcqrv26RjRV1flqJpXmFLXTudjopfyxV2q9Wi1WpZYiVmNW2P6rq2tsatW7e4efMmiURipC79\nkydPuH//Pjs7O5bTVfKmFfZXeSbufSHEvcOv/TLPulh8Swjxc8AB8LPnN8Szxe12Ew6HR9K9ZFcK\nWVNC1rqWd2yZJjSunZRPgOV0lRNbpmiGQiEA1TFGIoSgVqtRqVTUKsxi5XAtpe1RXWVhNlk3RJqy\nzKeXJXItqCvw5iyRP+fVqX9fP/vhnC9CCDwejzrVKNO95DFlWQCo3W6r2gPdbldVcpvEjJCXYTVd\n4cWMH4fDoU7Aeb1eotGoulqtFp1Oh16vpya01nY8kbpOTU2xsLDAzMwMoVBInZOQT03NZtPSukou\n3UlHj8dDJBJRp9/M2QPD4ZBer0er1aLRaIwYtiytarU3gFWQKzGZ8VOv1zk4OCCbzdLr9VQ7MJvN\nNjKxZWqm1nY8OWrYs7OzhEIhXC4XgHoiPmrYVtX1Uhq2eYVtLgRkNuyjK2yLhEIsy9EVtuxef+/e\nPZU7b7PZmJ6eVu2g5EpMM74c1fWoYb9uhW1FLG/Ysp29bMAaCoWIRCJEo1EVu5bim7ufp1IplXhv\nVfEnHbOusiCQ7Cojb7wOh4NCoUAikcBms5FOp9Xx80no4XcZMevqdDqp1+ukUincbjepVIpwOKzC\nIuYYdjqdtryul8KwZWNdj8ejKvPFYjG1upbt7VutljLsZDJJPp+nXq9batPCSphzdIPBoLoZx2Ix\nXC4Xbreb2dlZisWiSv3a399XxYKsPLEnGbOuLpeLer3O7u4u1WoVn8+n2oK53W7V3s1ut2vDtgI2\nmw2n04nX68Xv97+wwnY4HNjtdgzDUIa9v7+vDNtqR1uthLmxbjAYJBwOK22npqaYnZ3l+vXrlMtl\ntre32d7eVicbrT6xJxmzrk6nU2X1bG9vq9W3NHV5iCYUClEqlSiXyxPXReZtsLxhm2tey8kcDocJ\nBAJ4vV61KdHr9Wg0GpRKJdUlWzbV1YY9vgghsNlsOBwOXC4XXq+XQCCgutvLPo2yf5/ciKxWq7rt\n15jidDrx+XxEIhHcbjftdpt2u63i01JX2WS3Xq9TrVZVKq427AnG6XQSCASYmZlhbm5upF4IPN9l\nlhXczKekZOU2bdjjidxwEkLQaDRUlT3ZAkrqWq1WKZfLStdKpUKz2dSGPaa43W4ikQhzc3PqlKrT\n6cTpdNLr9ej1eurATKPRoNlsUqlU1OdW1tXyhu1wOAgGg6obuszjlIYtN6dardbIpC4Wi+oElZXS\ngqzEYDBQJW5fZdjNZnPEsGWRp06nM7EV+ayOPNwWj8eZn58fyaE3p/CVSiW2t7fZ2dlhf39fLbCs\nrOubjqYvAb8LRAEX8NuGYfxzIcSvAj8P5A6/9ZcNw/ij8xzoSXE6nQSDQaanp5Vhm1fY8qCMjJOV\ny2Vl2FbM4wRr6AqMHDc/jmHL0Ii5m4zVsIK28qzE3NwcKysrLC0tsby8zNLSEu12m2q1Sq1WY29v\nj36/rzaRra
yr5E0rbFkM/cFh9a8fCiH+GDCAXzcM49fPfYSnRMY4zZsVspuyEGIk9unxeHC5XDid\nTux2+8R2Vj4GE68rgNfrxefzqT5+g8GA3d1dvvvd76p0Tamt7M8pN5mlthZk4rU1p9f6fD7VbNfj\n8agOM8lkkp2dHdLpNPV6HSGE1XUFTt7AAGAy6zWakCl/spNyIBBQHZfdbvdIkwIrmbZVdPX7/ap4\nVywWo9/vs7m5SSaTYWFhgStXrrC8vEwgEFDaejwe2u220tZqOfZW0LbRaJDJZFRJiHA4zOLiIsPh\nkGq1yu7uLg8ePODRo0dkMhkqlQp2ux2Xy2VZXSVvahGmMBVD/87hl/6xEOKvhBDfFELEzmFs544Q\nQqX8BYPBkUktV2R2u51JrSV8HCZZV2nYq6urLC0tMRgM2Nzc5D//5//MD37wA1KpFP1+n2AwiN/v\nH3mKcjgc2GzHfvtPJJOqbaPRIJvNsr29zePHj0mn06pMaq1W4+nTp9y/f5//8T/+B9vb21QqFfWk\nbHVdj/U/O3y0+v94Vgy9BvwmcA24DWwC/+LcRnhKZArQYDBQd15ZF0QiwyPysUqGTWToxKpMsq7w\nrKtMIBBgamqKWCyG0+mk2WxycHCguo30+/2XamtlXWGyte31eirzQ3YBMrcBq9frFAoF8vm8SuOT\noU+r6/o2DQz+X1kM3TCMvOnvfwv4L+c2wlMiN59arRa1Wm2kjoQ5Q6TValEsFlV6kNxtPmruVmHS\ndQVUZ6BGo4HH42Fqagqv18vVq1fVBnO/31eZIebmyZPaj/M4TLq2Ho8Hn8+H3+8nkUgQiURUXr3X\n6yUej3Pz5k214pabkFbXFU7YwEAIMWsYhiwy/LeAh+c3xNNhzhYwG7YspSqT7avVqurXKL9Hrsyt\nhhV0hdHCP+FwmKmpKa5evUowGFQT3tyJpFarqYJeVt2csoK2Mq1vampKFWhzu90YhoHP5yMej3Pj\nxg1sNhvJZFIdjrKyrpKTNDD4P4D/VQjxAc/ShnaBv39+Qzwdr1phS8OWu875fF4dqpArbIkF79gT\nryswktLX7/eZmpri+vXrrK2t0W63KZfLVCoVcrkcpVKJer2uVthgSV3BAtqaD84cXWH7fD4SiYSq\nbW8YBqVSSR2AA8vqCpy8gcEfns9wzp5+v6/6M8r6A7Ijus/no1wuqxzdnZ0dcrkcrVbL0qJbQVeA\nTqdDrVYjl8upk3A2m43BYKAOVpTLZQ4ODtjb26NcLlv+kdkK2vb7fVqtFtVqlVwux+7urjrNKg+4\nlUolMpkMxWJR3bCtrKvE8icde72eytPM5/O0223y+Tw7Ozu43W6azaY60prNZslkMjSbzYsetuYY\ndDodyuUyw+GQXC6nmhY8efJEtXmTm1eZTIZSqWTJEJfVeJOucs5Wq1UODg4ula6XwrBrtZraaOx2\nu+zu7uL1erHb7ao7hblxgRUa7V4G2u02w+FQhTlarZZK3ZPNKMwdSXRt88ngVbp6vV6V7SXncrPZ\npNlsXhpdxXk9RgghrP98MkEYhnFm+U5a2/FB62pNXqWrdTPMNRqNxmJow9ZoNJoJ4dxCIhqNRqM5\nW/QKW6PRaCYEbdgajUYzIWjD1mg0mgnhXA1bCPENIcSnQojPhBD/9ISvsSOEuC+EuCeE+OQtfu53\nhBAZIcSnpq/FhBB/cvh6fyyEiJzwdX5VCJE6HNM9IcQ33vAaS0KI/3b4u9gQQvyTk4znNa/zVuM5\nLWeh6+HrvLW2WtfzQ+uqvn98dTW3wTrLC3AD2zwrnu4A/hL4/AleZxuIneDn/ifg88Cnpq/9S+CX\nDj//JeD/OeHr/Arwv7/FWOaAO4efB4BHwIdvO57XvM5bjWccdD2ptlpXretl1vU8V9gfAw8Nw9gz\nDKMPfAv4mRO+1lsfDjAM4ztA6ciXfxr4N4eff/M443nF67zVmAzDyBiG8eDw8zogu4C81Xhe8zpv\nNZ5Tcpa6wluOW+t6bmhdn7/G2Op6noa9CCRNf04dfu1tMQD5GPILpxzTjGEYBVD1gWdP8Von6t4h\nnncB+fPTjEdcXDeRs9IVzk5brevp0bq+hHHT9TwN+6wSvL9sGMYXgP8Z+HtCiP/ljF73NJyoe4d4\n1gXk93jWBaR60n9cXGw3kbNM3B83bbWuZ4PW9cXXORNdz9OwU8CS6c9LjN7Bj4VxWHTdMIwcz355\nXzrFmHJCiGkAIcQMkH3D979qTHnjEOC3jjMm8bwLyO8ah11ATjIe8YpuIm87nlNwJrrCmWqrdT09\nWlcT46rreRr2XwJ3hBALh4P+Wd6yJq8QwieE8B1+7ge+wek6ZXybZ8XdOfz47ZO8iBDC/Cj0xu4d\nQry8C8jbjudVr/O24zklp9YVzlxbrevp0bo+//7x1fUkO5XHvYC/CTwAPgN++QQ/vwL8GPgRz3ZY\n/8+3+Nl/C+wDXZ6tFP4eEAP+hGfB//8IRE7wOj/Hs42HHwN/BfwRsPCG1/jrwPDw/3Hv8PrG247n\nFa/zN992PBet62m01bpqXS+zrrqWiEaj0UwI+qSjRqPRTAjasDUajWZC0Iat0Wg0E4I2bI1Go5kQ\ntGFrNBrNhKANW6PRaCYEbdgajUYzIZzYsMUZ1c7VjB9aW2uidZ18TnRwRgjhBtZ5dpInA3wX+AeG\nYdw72+Fp3jVaW2uidbUGjhP+nKqdCyCEkLVzlfhCCH2EcowwDOO49Xe1thOE1tWavErXk4ZEzrJ2\nrma80NpaE62rBTipYes7sXXR2loTrasFOKlhn1ntXM3YobW1JlpXC3BSwz6T2rmasURra020rhbg\nRJuOhmG0hRD/EPhjnpn+vzEM44dnOjLNhaC1tSZaV2twbvWw9Y7zePEW2QRvRGs7PmhdrclZZ4lo\nNBqN5h2jDVuj0WgmBG3YGo1GMyFow9ZoNJoJQRu2RqPRTAjasDUajWZC0Iat0Wg0E4I2bI1Go5kQ\ntGFrNBrNhHDSetgACCF2gCowAHqGYXx0FoPSXCxaV2uidZ18TmXYPCvZ+DXDMIpnMRjN2KB1tSZa\n1wnnLEIiZ1bLQDNWaF2tidZ1gjmtYRvAnwgh7gshfuEsBqQZC7Su1kTrOuGcNiTyZcMwskKIGeCP\nhBDrhmH8p7MYmOZC0bpaE63rhHOqFbZhGNnDjzng94AvncWgNBeL1tWaaF0nnxOvsIUQPgDDMJpC\nCD/wDeD/OquBjRs2m+2V13A4pN/vq+vozzmdTlwuF06nk36/T6/Xo9frvfC948Bl0/WyoHW1BqcJ\nicwBv39Y9NwH/DvDMP7D2Qxr/HA6nfh8Pnw+H16vd+Rju92mUCioy4zX6yWRSBCPx0kkEhQKBQ4O\nDkin05RKpQv637yWS6XrJULragFObNiGYWwDH57hWMYal8tFMBgkFosRjUZHrlqtxubmJv1+/6WG\nvbi4yJ07d7hz5w7b29s8ePCARqMxloZ92XS9LGhdrcFpNx0vDS6Xi1A
oxMzMzMiKOR6PUygU6PV6\n5PP5F37O4/GwuLjIhx9+yNe+9jV+9KMf0Wg02N3dvYD/hUajmWQutWHb7Xbcbre6nE6nugDq9bq6\nPB4PsViM5eVlrly5gtvtxmazUalUaDab+Hw+rly5ghCCarVKrVajWq1it9vxer2Ew2FmZmaYn5/n\n6tWr3Lhxg263S71ep9FoUK/XGQwGF/wbuXw4HA5CoZC6ut0u1WqVarVKvV6/6OFdeoQQI/tFhmEw\nGAwYDoecVz/aceZSG7bT6SQcDhOJRIhGowSDQQKBAIFAAIBkMkkymaTT6eD1epmdnWVlZYXr169T\nKBTI5XLk83l6vR4ul4vr169z+/Zttra22NzcZHt7G3j2ppNvvGg0ytWrV5XJp1IpUqkUnU5HG/YF\n4HQ6SSQSrK6ucu3aNSqVitJPG/bFI4TA4XCohZR50/4yzpdLbdhydZVIJFhYWGB6epqpqSmmpqYw\nDENtKKbTabxeLzMzM6yurnLr1i1+/OMf8+TJEx4+fIjT6eS9997j+vXr3Lx5k08++YR+v082m0WI\nZwfLpGlHIhGuXr2K2+0mGo3i8/nodDocHBxc8G/jciIN++7du3z88cdks1nsdjvFYpFUKnXRw7v0\nSMN2u914PB663S4Ag8FAG7aVMBulfJySpim/7vf7iUQiKlQhY9LxeBzDMCiVSuzu7uJwOFRIZGlp\nidXVVTY3N2m1Wuzu7hIIBLhz5w7Ly8t8+ctfplqt8vTpU7xeL+12W6X9dTod3G43c3Nz6hG8Xq+z\nt7eHw2FZKS4Em82G3W5Xug+HQ/UoLd8TdrudQCBAPB7n+vXrfOELX+Dp06ekUimCweBF/xc0PJun\ndrsdp9OpwpByHtvt9hFdzypEIt83drsdeH5zGA6HZ/L6p8FyLiGFlJfP5yMcDqvL4XCoSwozGAzY\n2dmhVCqRTqeJRqMAbGxskMvl6PV6GIaBYRhKNBna+PznP4/dbicUCtFoNHjy5An7+/uUy2W63S6t\nVotkMsm9e/cACIVCeL1evF4v/X7/UsfjzpNoNMrc3Bxzc3N4PB6y2SyZTIZMJkMwGGR2dpa5uTni\n8TgzMzMUCgX+/M//nL29PR4/fjyWGTyXEcMw1GIHUE/EkUgEu90+omuv1zv1v2ez2dR7Y25ujsFg\noF4/m82e+vVPi+UMG57dIV0uF263W62KFxcXWVhYwOPxqMerdrtNKpVib2+PZDI5kmsNcHBwQDab\nVW8Es2HL0IaMp4XD4RHDrlQq9Ho9ZdiGYZDP59U4FhcXsdvtDAYDbdbnQDQa5fr167z//vtEIhEe\nPnyIzWajUCgQDodZW1vj/fffZ2lpiVqtRqFQYHt7m3Q6TTKZ1IY9JkjDhmcr3Xg8zsrKCjdv3iQY\nDCpdi8XimRn23Nwct2/f5v3336fX6/Hw4UMGg4E27PNArrDdbjder1dldty+fZtbt27h9/vVVSgU\n+O53v0symWRnZ4d+v69W5gDtdptOp0O/3x9ZYZtj0X6/n1qtRqvVol6vk8/nSafTL6yw8/k8Gxsb\nvPfee3zwwQf4/X7C4fDYPGpZDWnYX/nKV5ibm1OTemNjg0gkwtraGl/+8pdZXV3lk08+YXt7m+9/\n//vkcjmazSatVuui/wsanhv2YDCg2+3i9Xq5evUqH330EXNzcwghlK5ngVxh3759m7/xN/4G7Xab\nwWAwNntMbzRsIcTvAD8DZA3DuHv4tRjwLZ6dnkoDf9swjPJ5DvQNY1SxLbvdjsPhQAihDHYwGNDr\n9eh0OiMhkX6/j9PpJBgMMjMzo+LN8mc8Ho/a8HA4HDQaDfb393n06JFK92s2m+oQjLzkCrvb7dLv\n96nVatRqNeDZQRoZv47FYuRyuQtJ6ZsEXQ/HpPYgABU+Gg6H+Hw+AoEAfr8fm81Go9FQl3nfoNPp\n0Ov1Rn5W/l2z2aRcLpPJZEgmkxSLJysVLd93MqNBlh6Q/+67YlJ0PS5yn0EushKJBMFgECEE/X5f\nbd4vLi5SLBZptVpqoXVSBoOBen90u913ruHrOM4K+18B/xL416av/RrwB4Zh/IYQ4pcO//yL5zC+\nY2Gz2UaMWMal2+02xWKR3d1der0ehUJB5Vx7PB4Mw6DVahGPx/na175Gt9tVV7vdVvnR9Xodr9dL\noVDgxz/+Mfv7+7TbbXXJ1bU0i3w+T7lcfukjWq1WI5lMYrPZCAQCPH36lEKhcBF1RcZeV3huhDI3\nXtZrGQ6HTE1NceXKFZV1s7u7y87ODru7u5RKJR4/fozdbiccDrO+vk46nabf71Mul3ny5AkOh4PN\nzU3W19fZ29tTGQgnweFwjKSF1mo16vU6tVrtVK97AiZC1+MiD55JnQOBAP1+nwcPHiCEoFwu4/f7\n+fDDD8lms6rsw0kNezgcks1mefjwIcPhkG63y/r6+ksPxV0EbzRswzC+I4S4euTLPw3I9kLfBL7H\nBRu23EWWuZr9fp9ut0uxWKTf7yvjNm84hsNhlpaW1NXr9dTjsIxryksafqFQUCtw+e/0ej1l9L1e\nj0ajQbPZfKVhp1Ip6vU6LpeLcrlMpVJ554Y9CbrC6H4EQKfTUWGpqakpbt68yRe/+EUCgQA/+MEP\naLVapFIpZdilUgmPx0M+nyeXy9Hv96lUKjx58oRKpYLP5yOfz5PP509lrE6nk0AgwPT0NNPT0+Ry\nOQBardY7NexJ0fW4eDweFhYW+NznPscXv/hFNY8fPnxIpVJRabjLy8vs7e1ht9up1WonflKShj0c\nDsnlcgwGA/L5/AslJy6Kk8awZwzDKAAYhpEXQsye4ZjeGrnCdrvduFwuWq0WnU6HdrtNvV6nWCyq\ndD4zy8vLhMNhtcI2hy+KxSL7+/vs7e2xv7+vDrikUikymcwrx/KmDUR5ik7m+I7ZhuNY6QqolC6P\nx6PCXPLmFovFuHHjBl/96leJRCK0222SySR2u51SqUS5XObx48fqteTvulwuUy6X2dzcfOHvTorD\n4SAQCDAzM8PCwgLwbA+kXB6LyMPY6Xpc5Ar785//PF//+te5d+8eW1tbPHjwgJ2dHT7++GOWl5f5\n8MMPiUaj1Ov1U+XPD4fDl2aEjMs8tcSmozkmKYQgHA6zsLBAIBDA6XSqvE0hhIppdjodwuEwTqeT\nbDbLD3/4QzqdjjpSXqlUKBQK6u6az+epVqt0Op0XxHM4HLhcLnV1u10V/3I6ncRiMWKxGFNTU1Sr\nVUqlEsViceQk3bi8IcYNuf/QbrfVaVMZdrhy5QqdTof79+8D8Nlnn5HNZtV+wJt+p2/6eyGE0tTt\ndqtHZHmZ6ff7atNZZgTV6/WxLKE7bsjkALlaNudaB4NBIpEI6XSaP/uzP2NjY4OtrS2q1aqq3/Pk\nyRM1j/f29s7khOq4zseTGnZOCDF9eLeeAS4032U4HKrwg81mY35+ntXVVVZWVgiFQiMHZ6Qh12o1\nhsMhLpeLbDZLs9mk2Wyqv5
MxSPP1qnikzPc+Gr8cDofqkW5tbY21tTX29/dVZb9GowGM1ZtjrHSF\n54ZtGMZIeYCVlRWGwyHtdpv79+9TLpfZ2dkhk8mc2QauEAKPx6N0HQwGr3wf9Ho96vU6Qgja7bZ6\nD4yJYY+drmbkKvrGjRtcv34dp9M5Umu+0+mQTqfZ2dkhlUqxvb2tDDuXy+FwONTcTafTal5ZkZMa\n9reBvwv8xuHHb5/ZiE6AXGEPBgPsdjuRSITr16/z8ccfMzc3N3JySdb/yOfzFItFlSGwsbGhwhW1\nWo1Go6Hi00c/HsXhcOD1eolEIsRiMXUCq9Vq4fF4mJ+fV0ef19fXVU5nJpNRu89jYtpjpSugVrX9\nfh8hBLOzs9y5c4ePP/6Yx48f8+mnn/LgwQO2t7fVxu9ZGrbb7VYZPVL7TqfzginIcFq73aZUKo3U\nvBgDxk5XM16vl4WFBT744AO+8pWv4Ha7VXptqVTixz/+Mdvb2/zoRz+iVCqpBZRcYTcaDfb29tS5\nByunZB4nre/fAj8BTAshksA/A34F+JYQ4ueAA+Bnz3WUb0BW8ALUBJEm6vf7lfg2m02tpL1er0oF\nkyldskKbnHhvQsbFnU7nSOlVn8+n0pBCoRCLi4vMz88Tj8fJZrMqVHORTIKugLqhyXx1l8tFOBxm\ndnaW3d1d6vU6u7u7I7Hqs0SmFTocDobDoXpSO4o8vnyadLKzYFJ0NSOEUPsUfr8fn8/3QrGnbDbL\nxsYGzWZz5GeleZ8l5owz6S0v6yZ1ERwnS+TvvOKvvn7GYzkT5Op1fX0dIQTRaHSkPGOlUqFSqahY\nsoxRyzKpMgvhTZhzv71eL9FoVIViFhYWVMqfy+VStUmSyaTqNCNzvuWBnHfNpOkKz27G5XJZHR+X\n2SDnlYVhGAadTodqtQo8j1O/4zS9t2ISdW232+zt7XH//n16vR6BQECVb6jVaipN813kQgsh8Pl8\nBINBgsEgw+FwJER60Vhi09GM+QhpoVDA6/WObDrKpHqZPy1T8BqNhgp7HOeR2nygQxaGkrHqo/0e\n5aPx06dP2d/fV4Z9UWY9qZgNOxQKqYMu57WqlYYN0O12VTx1TMIclkGmYna7XdLptDpYFgwG6fV6\nqs7PuzhcJg17amqK2dlZ5SfyZn3RWM6wZVnTQqHAo0ePRir0Acok5Ym3o0Wdjmui5hrXcoUtDVtu\nUkOIU1sAABR1SURBVPn9ftrtNpubm2xubr5g2ONyempSkIadSqWw2Wyk0+lzN2x5g5fvIX2TPXuk\nYafTaex2u2q9J/eD8vn8O19hT01Nsbi4qEIh8qTyRWM5w4Z3VytXGn2r1SKfz7Ozs4PX61WlWuXq\nvtVqUSwWSSaTZDIZdWxd83aYU+dkISeZLXBemA3aXPkxGAyqQ0+VSuXCY9eTjMytlzFic1lTu91O\ntVpVT6TvYizdblcdnJOZQeMyXy1p2O8C86pcJuvLGtq3bt2i3+/j9/sxDGOkVkUulzt3k7Eq0rBt\nNptqr1ar1d7Z79Lv97O8vKz2Kba2ttja2lJ5/ZqzQZ44NgwDm82mTou+qyebRqOhzHo4HFIul8cm\n80Qb9gkxh1ZkE4Jyuczu7i6DwYBAIEAikcDpdFIul8lmsySTSSqVCu12e2zu2JOEebUjb3qyHMC7\nIBAIsLy8zOc+9zlu376N3+9XOcKVSuWdjOEyIPPuu92uKvL0rjI0DMOg2WwyGAxUGGScbsjasI/J\n0Q42R1N/ms2mqgkSi8WIx+MsLi6q4vnyarVaqhaJ5u2Qoa7jpFy+DeZNaXMI5OiKzlxf/fr16+zv\n76vTspqz4yJTJOVG87gY9FG0YR8TadKy1rYs8jMzM0O32+Xg4ICDgwMymQyFQoHHjx+rXNL19XXV\nCEGWb9WMB7JnoNTWHE89elPtdruUy2UODg7Y3d0lk8lQq9XGIj9XcznQhn1MzBUB/X6/WmWtra3R\nbDZ58OCBMu5isciTJ0+o1WrY7XZVTEaWBdVZBuODNGxZL0Su7MyHsSTdbpdKpUI6ncbv95PNZrVh\na94p2rCPiTRsj8dDKBRieXmZu3fv8qUvfUllfciuFIVCgVqtxs7ODkKIkVirNuvxQz41+Xw+1V3o\nZSYsDfvg4ACHw6ENW/POsb3pG4QQvyOEyAghPjV97VeFECkhxL3D6xvnO8yLx9ypZDAYqHKaU1NT\nxGIxgsEgLpcLQNXELpfLqvbBu9zlPg5a1+ccPX4sn4JsNhuhUIj5+Xlu3LjB0tISHo9Hnb4bR8PW\nulqbk3acMYBfNwzj189lVGOIuSJgs9lUJ99kUamjB3QmAK0rz3OA5Q1VFpsaDofY7Xamp6dVgwu7\n3a5qV+RyOYrFItVqdawMG62rpTlpxxmAiXKn0yIn8mAweKlhv6oo0LiidX2ODIPIkJXM4nG5XExP\nT3P9+nU++OADOp0Ojx49IpfLsbOzo+rFjFNOvdbV2rwxJPIa/rEQ4q+EEN88bPJ5aTCHR4426TSb\n9yQZuIlLpasMh8g+nuaGvTabTTXDuHXrFleuXMHv99Nqtdjb26NQKJxpOddz5lLpalVOati/CVwD\nbgObwL84sxGNKeFwmGvXrvHxxx/zkz/5kyQSCSqVCt///vf5/ve/r9oSzc/PMzU1RSAQwOGYuD3d\nS6fr65D9/B4/fswnn3zCvXv32N7eHpe2X2+D1tUinMhRDMNQLYSFEL8F/JczG9GYIg37/fffZ2Vl\nRbURk3W0a7UahmEwPz8/0lh3nB6X38Rl1PV1mA27Xq/T6XQ4ODiYuFONWlfrcCLDFkLMGoYh2wz9\nLeDh2Q1pPAmFQqyurvLRRx/x/vvv85d/+Zckk0k++eQTarUawWCQUChEIpHAbrePtACbFC6jrq9D\nGnatViOZTKqWZON6Cu5VaF2tw0k6zvwK8JNCiA8AF7AL/P1zHeU7Qsae5Ud5KlGmecmYdavVGqkP\n0mg0mJ2dVb0dB4PB2JfhvEy6vg0ulwuPx4PH48HlctFqtVSPxkmIVWtdrY04L1MRQoyvW70Ch8Oh\njpPbbDbVeKDX67G0tMTNmzd57733WFhYYH19nfX1df7qr/6Kdrutym6GQqGRrjZnXffipBiGcWY7\noJOo7XGJRCLMzc0xNzdHJBJR5QYymczYaGlG62pNXqXrxO2KnSd2u12tsOx2u6rB2+/3qVQqbG1t\nqfCHbOTbbrdV+EOW/5QdbcYsP1dzDHw+H4lEghs3bpBIJHj06BGDwUA1ndBoLhJt2CakYXu9XlWF\nT3brlqvmra0t9f3mp5PzaAaqeff4/X7m5+e5desW165dU2a9s7Nz0UPTaLRhm5H5uPIwTKfTUYcq\nJPLzl+VZj3PMWnM8ms0m6XSajY0NqtUqW1tb5PP5icr20VgXbdgmpGHLOhKyKe+rjNhs2q+qoayZ\nLKRhD4dDVTI3n8/rhhOasUBvOpp4XZbIq77XbNjjXDpVb04dDxkSO5ol0mq1xjJLROt
qTV6lqzbs\nV2Cz2VR6l8fjUbVE5NFlycu6sI8jemI/x5y653Q6lSFPYid7ras10Vkib4nL5SKRSLCwsMDCwgLt\ndptsNksul6NQKKhTjLJg0LgateZFwuEwi4uLLCwsEAqFSKVS7O3tsbe3pzNBNGONNuxXIA37zp07\nfPjhh1QqFZ48ecLjx4/p9Xqqi7KMcWvDnhzC4TCrq6t88MEHxONx7t+/j2EY5HI5bdiasea1hi2E\nWAJ+F4jy7JTUbxuG8c8Pq319C5gD0sDfNgxj4irivA5p2Hfv3uUnfuInyOVyuFwums0mxWIRQIVG\nJu0x+jLrCs/rwnz00Uesrq4yHA7J5XJ89tlnFz20U3PZtbU6b6rW1wX+kWEYd4EvAj8vhPgQ+DXg\nDwzD+AD4w8M/WwpZH7nZbFKr1ajVatTrdRqNBs1mU6X8TZpZH2J5XX0+H7Ozs6yurnLz5k0WFhYI\nh8PY7XaazSYHBwc8efKEhw8f8vTpU0qlklUOOlle28vMa1fYhmFkgMzh53UhxH1gAfhp4KPDb/sm\n8D3gF89xnO+cwWBAo9GgUCiwt7dHPp8nm81SLBap1Wpq83ESQyGXQddAIKD2H7xeL/v7++zv79Nq\ntdQBKMMwiEajbG9vs7+/b4lc68ug7WXm2DHswy4WXwJ+DpgxDKMAz0o3CiFmz2V0F8hwOFSGnUql\nKBQKZLNZSqUS1WqVwWDAYDCY1BW2wqq6BoNBFhYWeO+99wiHwypFT5ZH3dzcJJfL4Xa7qVarVKtV\ny+VaW1Xby8yxDFsIEQB+D/hFwzCqE9pJ5QVsNhsul0sVfJIYhoHX6wWgWq2yt7dHsVgkl8tRLpdp\nNpsXNeQzxaq6AjidTtUkORqNsr+/j9frxWaz0Wg0aDQapNPpix7muWFlbS8zxymv6gT+PfC7hmH8\n/uGXc0KI6cM79QyQffUrjC9e7//f3rn8tnHccfwz4mtFUnyJlONISmA0huLGcZMCRQ3UDQK0BZz2\nWCCHope0uaUogh5apJcG/QPSF3rIob00bVG0ZzdpAhRIckqiWk4iGbbV6EHKEuUlxfeSFMXpgdw1\nrehhvmpxdz7Agqsld/CzvuLPszO/me84p0+ftg4hhNVzBizHGLOHnclkbFNFYGddobW3y8bGBl6v\nl4mJCVZXV8lms3YZpz4Su2vrZI6rEhHAH4AlKeWvOt66Anwf+HX79crQIhwi4+PjzM7Ocv78eZ58\n8kmEEFZ9dblcJp1Os7W1RTKZJJvNksvlbJGw7a4rtBL27du3qVar+Hw+stks2Wz2RK5WHCRO0NbJ\nHLnSUQhxCXgX+BgwP/gK8AF3S4S2gOf3lwiNwqqp2dlZnn32WeswN3yq1Wrous78/Dzz8/N89NFH\nZLPZA1c6jgqdK6f60bV9/4nX1u12W0fnvjBH7Q0ziuxfEWf376xT6Gmlo5TyfQ4v/ftWv0E9CPx+\nP4FAAL/fz8MPP4zf76darXL79m38fr9lYmCOd5q97Uqlcs9wyShjR133YyZnp+EEbZ2M41Y6hsNh\na8x6cnISl8vF2toauq4Tj8dJJBIkEgnLv69er9NoNKyKEDv1zhQKxWjhqIQthCAcDvPII48wNzdH\nMBgknU6zurpKOp3m0Ucf5ezZs0gprZ53Z8JWyVqhUDxIHJGwzZImM2FPT0/z+OOP4/P5yOVy6LrO\nwsICpVIJj8dDNBolFothGAb1et0W9dZ2x9wW1+VyMTY2hpTS0q3ZbH7OcELtX24PnKar7RP22NiY\nNfnk8XgIhUJEIhGi0SihUIi5uTncbjexWIxgMEg0GrWWoN+5c4dSqWSLcWu7o2kaU1NT1pHP59ne\n3iadTlMsFvF6vfh8PrxeL3t7e9RqNer1+khOICtamOsonKSrIxK2KaqmaYRCIcLhMNFolEQigcvl\nYnJykjNnzlAul6lWq5RKJXZ2dtB1nXK5rHrXI4DP52N6eponnniCc+fOkUwmWVpawjAMyuUymqYR\nDAYJBoPUajVKpRLNZtO2X2wnYO5Z7yRdHZGwzaqPQCDAxMQE4XCYSCTCQw89RCwW48yZM9TrdT77\n7DNu3brF9vY2yWRS9bBHCE3TmJ6e5sKFCzzzzDMsLi5iGAapVIp0Oo3P52NiYoJoNIphGNaksmJ0\nGRsbc5yutk/Ybrcbv99POBwmFosRi8UIhUIEAgF8Pp81odjZszYfpXO5nPVHoDjZmGPWu7u790wW\nm5U9piOQedh1jNNJOFFXRyTsYDBIIpHg9OnTJBIJQqEQXq+XarVKKpVifX2dtbU1kskkqVSKVCpF\nNpulVCpRq9Vs/0dgB6rVKhsbG1y7ds06X1lZoVAoIKWkVqtRLBYtq7dKpeLIOm074URdbZ+wPR4P\nExMTxONxZmZmiMfjBINBPB4PtVrN+pLPz8+TyWQoFArk83nK5TL1et1yUVecbEwtzddCoYCu69aX\nuVarWa97e3u2nphyCk7UtVfHmVeBF4E77Y++IqV8c5iB9oq5a1sikWBmZuaeHnY+nyeVSrGwsMA7\n77xj2X7ZPUHbQdf9mE9LGxsb1rVOHavVqq3HNk3sqO1hmOPVTtDV5Lgetule8Wl7u8b/CCHeorVH\nwWtSyteGHuEA6KzT1HUdwzBIJpPs7OywtLREOp223R4Tx2ALXQ/C1DASiRCPx5mcnETTNHRdJ5PJ\noOu6rR+ZsZm2mqYRj8etw3xy0nWdQqHwoMP7v9Or4wzASG6wq+s6yWQSwzDIZDKsrq6yvb3tqIlF\nO+q6n2g0ymOPPcbc3ByhUIibN29y48YN8vm8rRO23bQ1yzXPnTvH3NwcGxsbXL9+nXq9rhL2UXS4\nV7zQfn1JCPEiMA/8WEqZHUaAg0bXddbX10mlUmxvb1Mulx1dumcXXfcTiUQ4e/YsFy9eZGpqCo/H\nYznNOAU7aKtpGjMzMzz11FNcunSJpaUl6vW6rc0njuI4E17Acq/4Oy33iiLwe+ALwBeB/wK/HVqE\nfWKWezUaDer1Orlcjs3NTZaXl1leXmZzc5Niseik4RCLUdYVwOVy4fV68fv9+P1+fD4fbrcbIYT1\n3vj4OH6/H03TrPecwKhra2Kuo9A0jUAgwPj4OB6Ph7Gx+0pdtqMbx5m/mO4VUkq94/3XgX8PLcI+\nMct9DMOgWCxa+4M4aQjkIEZdV2g9LgcCAQKBAC6Xy7L+KpfL5HI5lpeX8Xq9RCIRFhcX2drasnUF\ngYkdtDUxJ5MXFhZoNBqsr69b5ZpOpCfHGSHElJTStBj6LrA4vBD7w1ymaibsSqXi+IRtB12h9bgc\nDoeZnJzE4/GQyWRoNptUKhV2dna4desWhUIBTdOsxVB2Hr8G+2hrYibsRqPB5uYmuVyOra0tlbAP\n4Wu07IQ+FkJcbV/7OfA9IcQFWmVDa8APhxdif3QW1JdKJQzDYHd319EJGxvoCq0edjgc5tSpU3i9\nXitZCyHI5XIUi0VWVlbu8ep0wFyFLbQ1MR
P25uYmLpeLZrPpFB0PpFfHmX8OJ5zBYzrGZLNZXC4X\n2WyWcrls+57WUdhBV2hpW6lUyOVyeL1eSqWStdDJqV9qu2hrIqV0rHvQQdh+pWOj0aBYLCKEQNd1\npJSUSiVHjGXanVqtRj6fR0pJtVplb28PwzAcOYGscAa2T9i7u7tWz6ter+PxeCxDVsVoU61WaTab\nGIaBYRi43W52d3dVwlbYliNd0/tqWDkwnygOc2HuBaXtyUHpak8O09WZxYwKhUIxgqiErVAoFCPC\n0IZEFAqFQjFYVA9boVAoRgSVsBUKhWJEUAlboVAoRoShJmwhxGUhxCdCiCUhxM96bGNVCPGxEOKq\nEOKDLu77oxAiLYT4pONaTAjxdru9t4QQkR7beVUIkWrHdFUIcfmYNmaFEO+2fxc3hBA/7SWeI9rp\nKp5+GYSu7Xa61lbpOjyUrtbnT66upvPwoA/AB6zQ2jzdDXwIPN1DOytArIf7vg48DXzSce13wMvt\n85eB3/TYzi+An3QRyyngfPs8CNwEvtRtPEe001U8J0HXXrVVuipdnazrMHvYXwUWpZQbUsoG8Dfg\nOz221fXiACnle8DOvsvfBv7UPn/jfuI5pJ2uYpJSpqWUn7bPS4DpAtJVPEe001U8fTJIXaHLuJWu\nQ0PpereNE6vrMBP2DJDs+DnVvtYtEjAfQ37UZ0wJKWUGrP2Bp/po6yUhxHUhxBtCiNj93iTuuoC8\n3088He281088PTAoXWFw2ipd+0fpegAnTddhJuxBFXhflFJ+GfgG8IIQ4psDarcfenLvEC0XkH/Q\ncgHpeUNf8WDdRAZZuH/StFW6Dgal6+fbGYiuw0zYKWC24+dZ7v0f/L6Q7U3XpZR3aP3yvtJHTHeE\nEHEAIUQC2D7m84fFpMs2wOv3E5O46wLyZ9l2AeklHnGIm0i38fTBQHSFgWqrdO0fpWsHJ1XXYSbs\nD4HzQojpdtDP0+WevEIIvxDC3z4PAJfpzynjCq3N3Wm/XumlESFE56PQse4dQhzsAtJtPIe10208\nfdK3rjBwbZWu/aN0vfv5k6trLzOV93sAzwGfAkvAKz3cfwa4BizQmmH9ZRf3/hW4DdRp9RReAGLA\n27QG//8FRHpo5we0Jh6uAdeBN4HpY9q4BDTb/46r7eNyt/Ec0s5z3cbzoHXtR1ulq9LVybqqvUQU\nCoViRFArHRUKhWJEUAlboVAoRgSVsBUKhWJEUAlboVAoRgSVsBUKhWJEUAlboVAoRgSVsBUKhWJE\n+B95ZPpm8QyaLgAAAABJRU5ErkJggg==\n",
|
|
"text/plain": [
|
|
"<matplotlib.figure.Figure at 0x7f0475571290>"
|
|
]
|
|
},
|
|
"metadata": {},
|
|
"output_type": "display_data"
|
|
}
|
|
],
|
|
"source": [
|
|
"%pylab\n",
|
|
"%matplotlib inline\n",
|
|
"\n",
|
|
"train_dp.reset()\n",
|
|
"x, t = train_dp.next()\n",
|
|
"img = x[0].reshape(28,28)\n",
|
|
"pds = [0.9, 0.7, 0.5, 0.2, 0.1]\n",
|
|
"imgs = [None] * (len(pds)+1)\n",
|
|
"imgs[0] = img\n",
|
|
"\n",
|
|
"for i, pd in enumerate(pds):\n",
|
|
" d = rng.binomial(1, pd, img.shape)\n",
|
|
" imgs[i + 1] = d*img\n",
|
|
"\n",
|
|
"fig, ax = plt.subplots(2,3)\n",
|
|
"ax[0, 0].imshow(imgs[0], cmap=cm.Greys_r)\n",
|
|
"ax[0, 1].imshow(imgs[1], cmap=cm.Greys_r)\n",
|
|
"ax[0, 2].imshow(imgs[2], cmap=cm.Greys_r)\n",
|
|
"ax[1, 0].imshow(imgs[3], cmap=cm.Greys_r)\n",
|
|
"ax[1, 1].imshow(imgs[4], cmap=cm.Greys_r)\n",
|
|
"ax[1, 2].imshow(imgs[5], cmap=cm.Greys_r)\n"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Exercise 4: Implement Dropout \n",
|
|
"\n",
|
|
"Implement dropout regularisation technique. Then for the same initial configuration as in Exercise 1. investigate effectivness of different dropout rates applied to input features and/or hidden layers. Start with $p_{inp}=0.5$ and $p_{hid}=0.5$ and do some search for better settings.\n",
|
|
"\n",
|
|
"Implementation tips:\n",
|
|
"* Add a function `fprop_dropout` to `mlp.layers.MLP` class which (on top of `inputs` argument) takes also dropout-related argument(s) and perform dropout forward propagation through the model.\n",
|
|
"* One also would have to introduce required modificastions to `mlp.optimisers.SGDOptimiser.train_epoch()` function.\n",
|
|
"* Design and implemnt dropout scheduler in a similar way to how learning rates are handled (that is, allowing for some implementation dependent schedule which is kept independent of implementation in `mlp.optimisers.SGDOptimiser.train()`). \n",
|
|
" + For this exercise implement only fixed dropout scheduler - `DropoutFixed`, but implementation should allow to easily add other schedules in the future. \n",
|
|
" + Dropout scheduler of any type should return a tuple of two numbers $(p_{inp},\\; p_{hid})$, the first one is dropout factor for input features (data-points), and the latter dropout factor for hidden layers (assumed the same for all hidden layers)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 2,
|
|
"metadata": {
|
|
"collapsed": false,
|
|
"scrolled": true
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"ERROR: Line magic function `%autoreload` not found.\n",
|
|
"INFO:root:Training started...\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Training cost (ce) for initial model is 2.624. Accuracy is 8.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 0: Validation cost (ce) for initial model is 2.554. Accuracy is 9.84%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Training cost (ce) is 3.224. Accuracy is 43.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Validation cost (ce) is 0.633. Accuracy is 81.88%\n",
|
|
"INFO:mlp.optimisers:Epoch 1: Took 10 seconds. Training speed 273 pps. Validation speed 1678 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Training cost (ce) is 0.957. Accuracy is 68.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Validation cost (ce) is 0.523. Accuracy is 84.89%\n",
|
|
"INFO:mlp.optimisers:Epoch 2: Took 10 seconds. Training speed 247 pps. Validation speed 1687 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Training cost (ce) is 0.797. Accuracy is 73.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Validation cost (ce) is 0.613. Accuracy is 80.22%\n",
|
|
"INFO:mlp.optimisers:Epoch 3: Took 10 seconds. Training speed 237 pps. Validation speed 1692 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Training cost (ce) is 0.730. Accuracy is 74.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Validation cost (ce) is 0.442. Accuracy is 86.47%\n",
|
|
"INFO:mlp.optimisers:Epoch 4: Took 10 seconds. Training speed 242 pps. Validation speed 1687 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Training cost (ce) is 0.596. Accuracy is 79.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Validation cost (ce) is 0.410. Accuracy is 87.96%\n",
|
|
"INFO:mlp.optimisers:Epoch 5: Took 10 seconds. Training speed 239 pps. Validation speed 1710 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Training cost (ce) is 0.598. Accuracy is 80.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Validation cost (ce) is 0.388. Accuracy is 88.27%\n",
|
|
"INFO:mlp.optimisers:Epoch 6: Took 10 seconds. Training speed 248 pps. Validation speed 1681 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Training cost (ce) is 0.513. Accuracy is 84.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Validation cost (ce) is 0.426. Accuracy is 87.53%\n",
|
|
"INFO:mlp.optimisers:Epoch 7: Took 10 seconds. Training speed 245 pps. Validation speed 1687 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Training cost (ce) is 0.451. Accuracy is 85.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Validation cost (ce) is 0.383. Accuracy is 88.44%\n",
|
|
"INFO:mlp.optimisers:Epoch 8: Took 10 seconds. Training speed 255 pps. Validation speed 1681 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Training cost (ce) is 0.507. Accuracy is 82.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Validation cost (ce) is 0.380. Accuracy is 89.12%\n",
|
|
"INFO:mlp.optimisers:Epoch 9: Took 10 seconds. Training speed 266 pps. Validation speed 1718 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Training cost (ce) is 0.464. Accuracy is 83.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Validation cost (ce) is 0.333. Accuracy is 90.46%\n",
|
|
"INFO:mlp.optimisers:Epoch 10: Took 10 seconds. Training speed 245 pps. Validation speed 1698 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Training cost (ce) is 0.399. Accuracy is 87.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Validation cost (ce) is 0.334. Accuracy is 90.56%\n",
|
|
"INFO:mlp.optimisers:Epoch 11: Took 10 seconds. Training speed 246 pps. Validation speed 1672 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Training cost (ce) is 0.422. Accuracy is 86.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Validation cost (ce) is 0.348. Accuracy is 89.54%\n",
|
|
"INFO:mlp.optimisers:Epoch 12: Took 10 seconds. Training speed 239 pps. Validation speed 1687 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Training cost (ce) is 0.396. Accuracy is 86.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Validation cost (ce) is 0.348. Accuracy is 89.37%\n",
|
|
"INFO:mlp.optimisers:Epoch 13: Took 10 seconds. Training speed 236 pps. Validation speed 1692 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Training cost (ce) is 0.360. Accuracy is 88.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Validation cost (ce) is 0.354. Accuracy is 89.56%\n",
|
|
"INFO:mlp.optimisers:Epoch 14: Took 10 seconds. Training speed 248 pps. Validation speed 1675 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Training cost (ce) is 0.362. Accuracy is 87.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Validation cost (ce) is 0.351. Accuracy is 89.66%\n",
|
|
"INFO:mlp.optimisers:Epoch 15: Took 10 seconds. Training speed 252 pps. Validation speed 1695 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Training cost (ce) is 0.325. Accuracy is 88.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Validation cost (ce) is 0.322. Accuracy is 90.95%\n",
|
|
"INFO:mlp.optimisers:Epoch 16: Took 10 seconds. Training speed 256 pps. Validation speed 1684 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Training cost (ce) is 0.351. Accuracy is 88.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Validation cost (ce) is 0.334. Accuracy is 90.02%\n",
|
|
"INFO:mlp.optimisers:Epoch 17: Took 10 seconds. Training speed 246 pps. Validation speed 1684 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Training cost (ce) is 0.277. Accuracy is 91.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Validation cost (ce) is 0.329. Accuracy is 90.35%\n",
|
|
"INFO:mlp.optimisers:Epoch 18: Took 10 seconds. Training speed 265 pps. Validation speed 1698 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Training cost (ce) is 0.297. Accuracy is 89.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Validation cost (ce) is 0.308. Accuracy is 91.28%\n",
|
|
"INFO:mlp.optimisers:Epoch 19: Took 10 seconds. Training speed 245 pps. Validation speed 1678 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Training cost (ce) is 0.295. Accuracy is 90.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Validation cost (ce) is 0.319. Accuracy is 90.59%\n",
|
|
"INFO:mlp.optimisers:Epoch 20: Took 10 seconds. Training speed 246 pps. Validation speed 1689 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Training cost (ce) is 0.289. Accuracy is 90.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Validation cost (ce) is 0.298. Accuracy is 91.54%\n",
|
|
"INFO:mlp.optimisers:Epoch 21: Took 10 seconds. Training speed 251 pps. Validation speed 1689 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Training cost (ce) is 0.248. Accuracy is 92.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Validation cost (ce) is 0.315. Accuracy is 90.82%\n",
|
|
"INFO:mlp.optimisers:Epoch 22: Took 10 seconds. Training speed 250 pps. Validation speed 1689 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Training cost (ce) is 0.268. Accuracy is 90.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Validation cost (ce) is 0.309. Accuracy is 90.99%\n",
|
|
"INFO:mlp.optimisers:Epoch 23: Took 10 seconds. Training speed 242 pps. Validation speed 1712 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Training cost (ce) is 0.257. Accuracy is 90.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Validation cost (ce) is 0.300. Accuracy is 91.57%\n",
|
|
"INFO:mlp.optimisers:Epoch 24: Took 10 seconds. Training speed 248 pps. Validation speed 1684 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Training cost (ce) is 0.240. Accuracy is 91.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Validation cost (ce) is 0.297. Accuracy is 91.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 25: Took 10 seconds. Training speed 246 pps. Validation speed 1692 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Training cost (ce) is 0.242. Accuracy is 91.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Validation cost (ce) is 0.297. Accuracy is 91.43%\n",
|
|
"INFO:mlp.optimisers:Epoch 26: Took 10 seconds. Training speed 277 pps. Validation speed 1681 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Training cost (ce) is 0.228. Accuracy is 93.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Validation cost (ce) is 0.289. Accuracy is 92.07%\n",
|
|
"INFO:mlp.optimisers:Epoch 27: Took 10 seconds. Training speed 245 pps. Validation speed 1687 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Training cost (ce) is 0.241. Accuracy is 92.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Validation cost (ce) is 0.320. Accuracy is 90.39%\n",
|
|
"INFO:mlp.optimisers:Epoch 28: Took 10 seconds. Training speed 246 pps. Validation speed 1698 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Training cost (ce) is 0.227. Accuracy is 92.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Validation cost (ce) is 0.283. Accuracy is 91.96%\n",
|
|
"INFO:mlp.optimisers:Epoch 29: Took 10 seconds. Training speed 234 pps. Validation speed 1715 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Training cost (ce) is 0.217. Accuracy is 93.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Validation cost (ce) is 0.299. Accuracy is 91.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 30: Took 10 seconds. Training speed 268 pps. Validation speed 1730 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 31: Training cost (ce) is 0.191. Accuracy is 93.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 31: Validation cost (ce) is 0.277. Accuracy is 92.34%\n",
|
|
"INFO:mlp.optimisers:Epoch 31: Took 10 seconds. Training speed 254 pps. Validation speed 1692 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 32: Training cost (ce) is 0.212. Accuracy is 93.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 32: Validation cost (ce) is 0.287. Accuracy is 91.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 32: Took 11 seconds. Training speed 215 pps. Validation speed 1710 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 33: Training cost (ce) is 0.170. Accuracy is 94.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 33: Validation cost (ce) is 0.302. Accuracy is 91.77%\n",
|
|
"INFO:mlp.optimisers:Epoch 33: Took 10 seconds. Training speed 244 pps. Validation speed 1724 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 34: Training cost (ce) is 0.206. Accuracy is 93.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 34: Validation cost (ce) is 0.277. Accuracy is 92.13%\n",
|
|
"INFO:mlp.optimisers:Epoch 34: Took 11 seconds. Training speed 213 pps. Validation speed 1701 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 35: Training cost (ce) is 0.191. Accuracy is 93.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 35: Validation cost (ce) is 0.286. Accuracy is 92.08%\n",
|
|
"INFO:mlp.optimisers:Epoch 35: Took 10 seconds. Training speed 253 pps. Validation speed 1678 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 36: Training cost (ce) is 0.191. Accuracy is 92.00%\n",
|
|
"INFO:mlp.optimisers:Epoch 36: Validation cost (ce) is 0.289. Accuracy is 91.89%\n",
|
|
"INFO:mlp.optimisers:Epoch 36: Took 11 seconds. Training speed 194 pps. Validation speed 1687 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 37: Training cost (ce) is 0.190. Accuracy is 92.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 37: Validation cost (ce) is 0.280. Accuracy is 92.15%\n",
|
|
"INFO:mlp.optimisers:Epoch 37: Took 11 seconds. Training speed 192 pps. Validation speed 1678 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 38: Training cost (ce) is 0.200. Accuracy is 93.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 38: Validation cost (ce) is 0.317. Accuracy is 91.17%\n",
|
|
"INFO:mlp.optimisers:Epoch 38: Took 11 seconds. Training speed 186 pps. Validation speed 1684 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 39: Training cost (ce) is 0.198. Accuracy is 93.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 39: Validation cost (ce) is 0.301. Accuracy is 91.26%\n",
|
|
"INFO:mlp.optimisers:Epoch 39: Took 11 seconds. Training speed 194 pps. Validation speed 1678 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 40: Training cost (ce) is 0.159. Accuracy is 94.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 40: Validation cost (ce) is 0.277. Accuracy is 92.25%\n",
|
|
"INFO:mlp.optimisers:Epoch 40: Took 11 seconds. Training speed 212 pps. Validation speed 1667 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 41: Training cost (ce) is 0.150. Accuracy is 94.70%\n",
|
|
"INFO:mlp.optimisers:Epoch 41: Validation cost (ce) is 0.265. Accuracy is 92.79%\n",
|
|
"INFO:mlp.optimisers:Epoch 41: Took 11 seconds. Training speed 195 pps. Validation speed 1710 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 42: Training cost (ce) is 0.147. Accuracy is 95.20%\n",
|
|
"INFO:mlp.optimisers:Epoch 42: Validation cost (ce) is 0.268. Accuracy is 92.54%\n",
|
|
"INFO:mlp.optimisers:Epoch 42: Took 10 seconds. Training speed 238 pps. Validation speed 1684 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 43: Training cost (ce) is 0.171. Accuracy is 93.90%\n",
|
|
"INFO:mlp.optimisers:Epoch 43: Validation cost (ce) is 0.281. Accuracy is 91.75%\n",
|
|
"INFO:mlp.optimisers:Epoch 43: Took 10 seconds. Training speed 229 pps. Validation speed 1713 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 44: Training cost (ce) is 0.123. Accuracy is 95.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 44: Validation cost (ce) is 0.276. Accuracy is 92.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 44: Took 11 seconds. Training speed 210 pps. Validation speed 1678 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 45: Training cost (ce) is 0.136. Accuracy is 95.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 45: Validation cost (ce) is 0.288. Accuracy is 91.73%\n",
|
|
"INFO:mlp.optimisers:Epoch 45: Took 11 seconds. Training speed 185 pps. Validation speed 1675 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 46: Training cost (ce) is 0.178. Accuracy is 93.60%\n",
|
|
"INFO:mlp.optimisers:Epoch 46: Validation cost (ce) is 0.286. Accuracy is 92.01%\n",
|
|
"INFO:mlp.optimisers:Epoch 46: Took 11 seconds. Training speed 213 pps. Validation speed 1672 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 47: Training cost (ce) is 0.139. Accuracy is 95.40%\n",
|
|
"INFO:mlp.optimisers:Epoch 47: Validation cost (ce) is 0.262. Accuracy is 92.84%\n",
|
|
"INFO:mlp.optimisers:Epoch 47: Took 11 seconds. Training speed 186 pps. Validation speed 1687 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 48: Training cost (ce) is 0.141. Accuracy is 95.30%\n",
|
|
"INFO:mlp.optimisers:Epoch 48: Validation cost (ce) is 0.265. Accuracy is 92.80%\n",
|
|
"INFO:mlp.optimisers:Epoch 48: Took 11 seconds. Training speed 199 pps. Validation speed 1675 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 49: Training cost (ce) is 0.120. Accuracy is 95.50%\n",
|
|
"INFO:mlp.optimisers:Epoch 49: Validation cost (ce) is 0.261. Accuracy is 92.97%\n",
|
|
"INFO:mlp.optimisers:Epoch 49: Took 11 seconds. Training speed 206 pps. Validation speed 1684 pps.\n",
|
|
"INFO:mlp.optimisers:Epoch 50: Training cost (ce) is 0.123. Accuracy is 96.10%\n",
|
|
"INFO:mlp.optimisers:Epoch 50: Validation cost (ce) is 0.272. Accuracy is 92.63%\n",
|
|
"INFO:mlp.optimisers:Epoch 50: Took 10 seconds. Training speed 248 pps. Validation speed 1670 pps.\n",
|
|
"INFO:root:Testing the model on test set:\n",
|
|
"INFO:root:MNIST test set accuracy is 91.94 %, cost (ce) is 0.291\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"%autoreload\n",
|
|
"\n",
|
|
"import numpy\n",
|
|
"import logging\n",
|
|
"\n",
|
|
"from mlp.layers import MLP, Linear, Sigmoid, Softmax #import required layer types\n",
|
|
"from mlp.optimisers import SGDOptimiser #import the optimiser\n",
|
|
"from mlp.dataset import MNISTDataProvider #import data provider\n",
|
|
"from mlp.costs import CECost #import the cost we want to use for optimisation\n",
|
|
"from mlp.schedulers import LearningRateFixed, DropoutFixed\n",
|
|
"\n",
|
|
"logger = logging.getLogger()\n",
|
|
"logger.setLevel(logging.INFO)\n",
|
|
"rng = numpy.random.RandomState([2015,10,10])\n",
|
|
"\n",
|
|
"#some hyper-parameters\n",
|
|
"nhid = 800\n",
|
|
"learning_rate = 0.5\n",
|
|
"max_epochs = 50\n",
|
|
"l1_weight = 0.0\n",
|
|
"l2_weight = 0.0\n",
|
|
"cost = CECost()\n",
|
|
" \n",
|
|
"stats = []\n",
|
|
"layer = 1\n",
|
|
"for i in xrange(1, 2):\n",
|
|
"\n",
|
|
" train_dp.reset()\n",
|
|
" valid_dp.reset()\n",
|
|
" test_dp.reset()\n",
|
|
" \n",
|
|
" #define the model\n",
|
|
" model = MLP(cost=cost)\n",
|
|
" model.add_layer(Sigmoid(idim=784, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" for i in xrange(1, layer):\n",
|
|
" logger.info(\"Stacking hidden layer (%s)\" % str(i+1))\n",
|
|
" model.add_layer(Sigmoid(idim=nhid, odim=nhid, irange=0.2, rng=rng))\n",
|
|
" model.add_layer(Softmax(idim=nhid, odim=10, rng=rng))\n",
|
|
"\n",
|
|
" # define the optimiser, here stochasitc gradient descent\n",
|
|
" # with fixed learning rate and max_epochs\n",
|
|
" lr_scheduler = LearningRateFixed(learning_rate=learning_rate, max_epochs=max_epochs)\n",
|
|
" dp_scheduler = DropoutFixed(0.5, 0.5)\n",
|
|
" optimiser = SGDOptimiser(lr_scheduler=lr_scheduler, \n",
|
|
" dp_scheduler=dp_scheduler,\n",
|
|
" l1_weight=l1_weight, \n",
|
|
" l2_weight=l2_weight)\n",
|
|
"\n",
|
|
" logger.info('Training started...')\n",
|
|
" tr_stats, valid_stats = optimiser.train(model, train_dp, valid_dp)\n",
|
|
"\n",
|
|
" logger.info('Testing the model on test set:')\n",
|
|
" tst_cost, tst_accuracy = optimiser.validate(model, test_dp)\n",
|
|
" logger.info('MNIST test set accuracy is %.2f %%, cost (%s) is %.3f'%(tst_accuracy*100., cost.get_name(), tst_cost))\n",
|
|
" \n",
|
|
" stats.append((tr_stats, valid_stats, (tst_cost, tst_accuracy)))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {
|
|
"collapsed": true
|
|
},
|
|
"outputs": [],
|
|
"source": []
|
|
}
|
|
],
|
|
"metadata": {
|
|
"kernelspec": {
|
|
"display_name": "Python 2",
|
|
"language": "python",
|
|
"name": "python2"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 2
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython2",
|
|
"version": "2.7.9"
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 0
|
|
}
|