{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction\n",
"\n",
"This tutorial is an introduction to the first coursework about multi-layer networks (also known as Multi-Layer Perceptrons - MLPs - or Deep Neural Networks - DNNs). Here, we will show how to build a single layer linear model (similar to the one from the previous lab) for MNIST digit classification using the provided code-base. \n",
"\n",
"The principal purpose of this introduction is to get you familiar with how to connect the code blocks (and what operations each of them implements) in order to set up an experiment that includes 1) building the model structure 2) optimising the model's parameters (weights) and 3) evaluating the model on test data. \n",
"\n",
"## For those affected by notebook kernel issues\n",
"\n",
"In case you are still having issues with running notebook kernels, have a look at [this note](https://github.com/CSTR-Edinburgh/mlpractical/blob/master/kernel_issue_fix.md) on the GitHub.\n",
"\n",
"## Virtual environments\n",
"\n",
"Before you proceed onwards, remember to activate your virtual environment:\n",
" * If you were in last week's Tuesday or Wednesday group type `activate_mlp` or `source ~/mlpractical/venv/bin/activate`\n",
" * If you were in the Monday group:\n",
" + and if you have chosen the **comfy** way type: `workon mlpractical`\n",
" + and if you have chosen the **generic** way, `source` your virutal environment using `source` and specyfing the path to the activate script (you need to localise it yourself, there were not any general recommendations w.r.t dir structure and people have installed it in different places, usually somewhere in the home directories. If you cannot easily find it by yourself, use something like: `find . -iname activate` ):\n",
"\n",
"## Syncing the git repository\n",
"\n",
"Look <a href=\"https://github.com/CSTR-Edinburgh/mlpractical/blob/master/gitFAQ.md\">here</a> for more details. But in short, we recommend to create a separate branch for the coursework, as follows:\n",
"\n",
"1. Enter the mlpractical directory `cd ~/mlpractical/repo-mlp`\n",
"2. List the branches and check which is currently active by typing: `git checkout`\n",
"3. If you are not in `master` branch, switch to it by typing: \n",
"```\n",
"git checkout master\n",
" ```\n",
"4. Then update the repository (note, assuming master does not have any conflicts), if there are some, have a look <a href=\"https://github.com/CSTR-Edinburgh/mlpractical/blob/master/gitFAQ.md\">here</a>\n",
"```\n",
"git pull\n",
"```\n",
"5. And now, create the new branch & swith to it by typing:\n",
"```\n",
"git checkout -b coursework1\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multi Layer Models\n",
"\n",
"Today, we shall build models which can have an arbitrary number of hidden layers. Please have a look at the diagram below, and the corresponding computations (which have an *exact* matrix form as expected by numpy, and row-wise orientation; note that $\\circ$ denotes an element-wise product). In the diagram, we briefly describe how each comptation relates to the code we have provided.\n",
"\n",
"![Making Predictions](res/code_scheme.svg)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Structuring the model\n",
" * The model (for now) is allowed to have a sequence of layers, mapping inputs $\\mathbf{x}$ to outputs $\\mathbf{y}$. \n",
" * This operation is implemented as a special type of a layer in `mlp.layers.MLP` class. It keeps a sequence of other layers (of various typyes like Linear, Sigmoid, Softmax, etc.) as well as the internal state of a model for a mini-batch, that is, the intermediate data produced in *forward* and *backward* passes.\n",
"2. Forward computation\n",
" * `mlp.layers.MLP` provides an `fprop()` method that iterates over defined layers propagates $\\mathbf{x}$ to $\\mathbf{y}$. \n",
" * Each layer (look at `mlp.layers.Linear` attached below) also implements an `fprop()` method, which performs an atomic, for the given layer, operation. Most often, for the $i$-th layer, we want to obtain a linear transform $\\mathbf a^i$ of the inputs, and apply some non-linear transfer function $f^i(\\mathbf a^i)$ to produce the output $\\mathbf h^i$. Note, in general each layer may implement different activation functions $f^i()$, however for now we will use only `sigmoid` and `softmax`\n",
"3. Backward computation\n",
" * Similarly, `mlp.layers.MLP` also implements a `bprop()` function, to back-propagate the errors from the top to the bottom layer. This class also keeps the back-propagated statistics ($\\delta$) to be used later when computing the gradients with respect to the parameters.\n",
" * This functionality is also re-implemented by particular layers (again, have a look at the `bprop` function of `mlp.layers.Linear`). `bprop()` returns both $\\delta$ (needed to update the parameters) but also back-progapates the gradient down to the inputs. Also note, that depending on whether the layer is the top or not (i.e. if it deals directly with the cost function or not) some simplifications may apply ( as with cross-entropy and softmax). That's why when implementing a new type of layer that may be used as an output layer one also need to specify the implementation of `bprop_cost()`.\n",
"4. Learning the model\n",
" * The actual evaluation of the cost as well as the *forward* and *backward* passes may be found in the `train_epoch()` method of `mlp.optimisers.SGDOptimiser`\n",
" * This function also calls the `pgrads()` method on each layer, that given activations and deltas, returns the list of the gradients of the cost with respect to the model parameters, i.e. $\\frac{\\partial{\\mathbf{E}}}{\\partial{\\mathbf{W^i}}}$ and $\\frac{\\partial{\\mathbf{E}}}{\\partial{\\mathbf{b}^i}}$ at the above diagram (look at an example implementation in `mlp.layers.Linear`)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Example code for the above\n",
"```python\n",
"# %load -s Linear mlp/layers.py\n",
"class Linear(Layer):\n",
"\n",
" def __init__(self, idim, odim,\n",
" rng=None,\n",
" irange=0.1):\n",
"\n",
" super(Linear, self).__init__(rng=rng)\n",
"\n",
" self.idim = idim\n",
" self.odim = odim\n",
"\n",
" self.W = self.rng.uniform(\n",
" -irange, irange,\n",
" (self.idim, self.odim))\n",
"\n",
" self.b = numpy.zeros((self.odim,), dtype=numpy.float32)\n",
"\n",
" def fprop(self, inputs):\n",
" \"\"\"\n",
" Implements a forward propagation through the i-th layer, that is\n",
" some form of:\n",
" a^i = xW^i + b^i\n",
" h^i = f^i(a^i)\n",
" with f^i, W^i, b^i denoting a non-linearity, weight matrix and\n",
" biases of this (i-th) layer, respectively and x denoting inputs.\n",
"\n",
" :param inputs: matrix of features (x) or the output of the previous layer h^{i-1}\n",
" :return: h^i, matrix of transformed by layer features\n",
" \"\"\"\n",
" a = numpy.dot(inputs, self.W) + self.b\n",
" # here f() is an identity function, so just return a linear transformation\n",
" return a\n",
"\n",
" def bprop(self, h, igrads):\n",
" \"\"\"\n",
" Implements a backward propagation through the layer, that is, given\n",
" h^i denotes the output of the layer and x^i the input, we compute:\n",
" dh^i/dx^i which by chain rule is dh^i/da^i da^i/dx^i\n",
" x^i could be either features (x) or the output of the lower layer h^{i-1}\n",
" :param h: it's an activation produced in forward pass\n",
" :param igrads, error signal (or gradient) flowing to the layer, note,\n",
" this in general case does not corresponds to 'deltas' used to update\n",
" the layer's parameters, to get deltas ones need to multiply it with\n",
" the dh^i/da^i derivative\n",
" :return: a tuple (deltas, ograds) where:\n",
" deltas = igrads * dh^i/da^i\n",
" ograds = deltas \\times da^i/dx^i\n",
" \"\"\"\n",
"\n",
" # since df^i/da^i = 1 (f is assumed identity function),\n",
" # deltas are in fact the same as igrads\n",
" ograds = numpy.dot(igrads, self.W.T)\n",
" return igrads, ograds\n",
"\n",
" def bprop_cost(self, h, igrads, cost):\n",
" \"\"\"\n",
" Implements a backward propagation in case the layer directly\n",
" deals with the optimised cost (i.e. the top layer)\n",
" By default, method should implement a bprop for default cost, that is\n",
" the one that is natural to the layer's output, i.e.:\n",
" here we implement linear -> mse scenario\n",
" :param h: it's an activation produced in forward pass\n",
" :param igrads, error signal (or gradient) flowing to the layer, note,\n",
" this in general case does not corresponds to 'deltas' used to update\n",
" the layer's parameters, to get deltas ones need to multiply it with\n",
" the dh^i/da^i derivative\n",
" :param cost, mlp.costs.Cost instance defining the used cost\n",
" :return: a tuple (deltas, ograds) where:\n",
" deltas = igrads * dh^i/da^i\n",
" ograds = deltas \\times da^i/dx^i\n",
" \"\"\"\n",
"\n",
" if cost is None or cost.get_name() == 'mse':\n",
" # for linear layer and mean square error cost,\n",
" # cost back-prop is the same as standard back-prop\n",
" return self.bprop(h, igrads)\n",
" else:\n",
" raise NotImplementedError('Linear.bprop_cost method not implemented '\n",
" 'for the %s cost' % cost.get_name())\n",
"\n",
" def pgrads(self, inputs, deltas):\n",
" \"\"\"\n",
" Return gradients w.r.t parameters\n",
"\n",
" :param inputs, input to the i-th layer\n",
" :param deltas, deltas computed in bprop stage up to -ith layer\n",
" :return list of grads w.r.t parameters dE/dW and dE/db in *exactly*\n",
" the same order as the params are returned by get_params()\n",
"\n",
" Note: deltas here contain the whole chain rule leading\n",
" from the cost up to the the i-th layer, i.e.\n",
" dE/dy^L dy^L/da^L da^L/dh^{L-1} dh^{L-1}/da^{L-1} ... dh^{i}/da^{i}\n",
" and here we are just asking about\n",
" 1) da^i/dW^i and 2) da^i/db^i\n",
" since W and b are only layer's parameters\n",
" \"\"\"\n",
"\n",
" grad_W = numpy.dot(inputs.T, deltas)\n",
" grad_b = numpy.sum(deltas, axis=0)\n",
"\n",
" return [grad_W, grad_b]\n",
"\n",
" def get_params(self):\n",
" return [self.W, self.b]\n",
"\n",
" def set_params(self, params):\n",
" #we do not make checks here, but the order on the list\n",
" #is assumed to be exactly the same as get_params() returns\n",
" self.W = params[0]\n",
" self.b = params[1]\n",
"\n",
" def get_name(self):\n",
" return 'linear'\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 1: Experiment with linear models and MNIST\n",
"\n",
"The below snippet demonstrates how to use the code we have provided for the coursework 1. Get familiar with it, as from now on we will use till the end of the course, including the 2nd coursework.\n",
"\n",
"It should be straightforward to extend the following code to more complex models, like stack more layers, change the cost, the optimiser, learning rate schedules, etc.. But **ask** in case something is not clear.\n",
"\n",
"In this particular example, we use the following components:\n",
" * One layer mapping data-points ($\\mathbf x$) straight to 10 digits classes represented as 10 (linear) outputs ($\\mathbf y$). This operation is implemented as a linear layer in `mlp.layers.Linear`. Get familiar with this class (read the comments, etc.) as it is going to be a building block for the coursework.\n",
" * One can stack as many different layers as required through the container `mlp.layers.MLP`\n",
" * As an objective here we use the Mean Square Error cost defined in `mlp.costs.MSECost`\n",
" * Our *Stochastic Gradient Descent* optimiser can be found in `mlp.optimisers.SGDOptimiser`. Its parent `mlp.optimisers.Optimiser` implements validation functionality (and an interface in case one need to implement a different optimiser)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy\n",
"import logging\n",
"\n",
"logger = logging.getLogger()\n",
"logger.setLevel(logging.INFO)\n",
"\n",
"from mlp.layers import MLP, Linear #import required layer types\n",
"from mlp.optimisers import SGDOptimiser #import the optimiser\n",
"from mlp.dataset import MNISTDataProvider #import data provider\n",
"from mlp.costs import MSECost #import the cost we want to use for optimisation\n",
"from mlp.schedulers import LearningRateFixed\n",
"\n",
"rng = numpy.random.RandomState([2015,10,10])\n",
"\n",
"# define the model structure, here just one linear layer\n",
"# and mean square error cost\n",
"cost = MSECost()\n",
"model = MLP(cost=cost)\n",
"model.add_layer(Linear(idim=784, odim=10, rng=rng))\n",
"#one can stack more layers here\n",
"\n",
"# define the optimiser, here stochasitc gradient descent\n",
"# with fixed learning rate and max_epochs as stopping criterion\n",
"lr_scheduler = LearningRateFixed(learning_rate=0.01, max_epochs=20)\n",
"optimiser = SGDOptimiser(lr_scheduler=lr_scheduler)\n",
"\n",
"logger.info('Initialising data providers...')\n",
"train_dp = MNISTDataProvider(dset='train', batch_size=100, max_num_batches=-10, randomize=True)\n",
"valid_dp = MNISTDataProvider(dset='valid', batch_size=100, max_num_batches=-10, randomize=False)\n",
"\n",
"logger.info('Training started...')\n",
"optimiser.train(model, train_dp, valid_dp)\n",
"\n",
"logger.info('Testing the model on test set:')\n",
"test_dp = MNISTDataProvider(dset='eval', batch_size=100, max_num_batches=-10, randomize=False)\n",
"cost, accuracy = optimiser.validate(model, test_dp)\n",
"logger.info('MNIST test set accuracy is %.2f %% (cost is %.3f)'%(accuracy*100., cost))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise\n",
"\n",
"Modify the above code by adding an intemediate linear layer of size 200 hidden units between input and output layers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}