mlpractical/notebooks/03_Multiple_layer_models.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$\n",
"\\newcommand{\\vct}[1]{\\boldsymbol{#1}}\n",
"\\newcommand{\\mtx}[1]{\\mathbf{#1}}\n",
"\\newcommand{\\tr}{^\\mathrm{T}}\n",
"\\newcommand{\\reals}{\\mathbb{R}}\n",
"\\newcommand{\\lpa}{\\left(}\n",
"\\newcommand{\\rpa}{\\right)}\n",
"\\newcommand{\\lsb}{\\left[}\n",
"\\newcommand{\\rsb}{\\right]}\n",
"\\newcommand{\\lbr}{\\left\\lbrace}\n",
"\\newcommand{\\rbr}{\\right\\rbrace}\n",
"\\newcommand{\\fset}[1]{\\lbr #1 \\rbr}\n",
"\\newcommand{\\pd}[2]{\\frac{\\partial #1}{\\partial #2}}\n",
"$\n",
"\n",
"\n",
"# Multiple layer models and Activation Functions\n",
"\n",
"In this notebook we will explore network models with multiple layers of transformations. This will build upon the single-layer affine model we looked at in the previous notebook and use material covered in the second and third lectures.\n",
"\n",
"You will need to use these models for the experiments you will be running in the first coursework so part of the aim of this lab will be to get you familiar with how to construct multiple layer models in our framework and how to train them.\n",
"\n",
"## What is a layer?\n",
"\n",
"Often when discussing (neural) network models, a network layer is taken to mean an input to output transformation of the form\n",
"\n",
"\\begin{equation}\n",
" \\boldsymbol{y} = \\boldsymbol{f}(\\mathbf{W} \\boldsymbol{x} + \\boldsymbol{b})\n",
" \\qquad\n",
" \\Leftrightarrow\n",
" \\qquad\n",
" y_k = f\\left(\\sum_{d=1}^D \\left( W_{kd} x_d \\right) + b_k \\right)\n",
"\\end{equation}\n",
"\n",
"where $\\mathbf{W}$ and $\\boldsymbol{b}$ parameterise an affine transformation as discussed in the previous notebook, and $f$ is a function applied elementwise to the result of the affine transformation (sometimes called the activation function). For example a common choice for $f$ is the logistic sigmoid function \n",
"\\begin{equation}\n",
" f(u) = \\frac{1}{1 + \\exp(-u)}.\n",
"\\end{equation}\n",
"\n",
"In the second lecture slides you were shown how to train a model consisting of an affine transformation followed by the elementwise logistic sigmoid using gradient descent. This was referred to as a 'sigmoid single-layer network'.\n",
"\n",
"In the previous notebook we also referred to single-layer models, where in that case the layer was an affine transformation, with you implementing the various necessary methods for the `AffineLayer` class before using an instance of that class within a `SingleLayerModel` on a regression problem. We could in that case consider the activation function $f$ to be the identity function $f(u) = u$. In the code for the labs we will however use a slightly different convention. Here we will consider the affine transformation and the subsequent activation function $f$ to be two separate transformation layers. \n",
"\n",
"This allows us to combine our already implemented `AffineLayer` class with any non-linear activation function applied to the outputs by simply implementing a layer object for the relevant non-linearity and then stacking the two layers together. An alternative would be to have our new layer objects inherit from `AffineLayer` and then call the relevant parent class methods in the child class; however, this would mean we need to duplicate a lot of the same boilerplate code in every new class.\n",
"\n",
"To give a concrete example, in the `mlp.layers` module there is a definition for a `SigmoidLayer` equivalent to the following (documentation strings have been removed here for brevity)\n",
"\n",
"```python\n",
"class SigmoidLayer(Layer):\n",
"\n",
" def fprop(self, inputs):\n",
" return 1. / (1. + np.exp(-inputs))\n",
"\n",
" def bprop(self, inputs, outputs, grads_wrt_outputs):\n",
" return grads_wrt_outputs * outputs * (1. - outputs)\n",
"```\n",
"\n",
"As you can see this `SigmoidLayer` class has a very lightweight definition, defining just two key methods:\n",
"\n",
" * `fprop` which takes a batch of values at the input to the layer and forward propagates them to produce activations at the outputs (directly equivalent to the `fprop` method you implemented for the `AffineLayer` in the previous notebook),\n",
" * `bprop` which takes a batch of gradients with respect to the outputs of the layer and backward propagates them to calculate gradients with respect to the inputs of the layer (explained in more detail below).\n",
" \n",
"This `SigmoidLayer` class only implements the logistic sigmoid non-linearity transformation and so does not have any parameters. Therefore unlike `AffineLayer` it is derived directly from the base `Layer` class rather than `LayerWithParameters` and does not need to implement `grads_wrt_params` or `params` methods. \n",
"\n",
"To create a model consisting of an affine transformation followed by applying an elementwise logistic sigmoid transformation we first create a list of the two layer objects (in the order they are applied from inputs to outputs) and then use this to instantiate a new `MultipleLayerModel` object:\n",
"\n",
"```python\n",
"from mlp.layers import AffineLayer, SigmoidLayer\n",
"from mlp.models import MultipleLayerModel\n",
"\n",
"layers = [AffineLayer(input_dim, output_dim), SigmoidLayer()]\n",
"model = MultipleLayerModel(layers)\n",
"```\n",
"\n",
"Because of the modular way in which the layers are defined we can also stack an arbitrarily long sequence of layers together to produce deeper models. For instance the following would define a model consisting of three pairs of affine and logistic sigmoid transformations.\n",
"\n",
"```python\n",
"model = MultipleLayerModel([\n",
" AffineLayer(input_dim, hidden_dim), SigmoidLayer(),\n",
" AffineLayer(hidden_dim, hidden_dim), SigmoidLayer(),\n",
" AffineLayer(hidden_dim, output_dim), SigmoidLayer(),\n",
"])\n",
"```\n",
"\n",
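"As a rough illustration of what stacking layers means computationally, the sketch below simply chains `fprop` calls through a list of layer objects. This is a conceptual outline only; the actual `MultipleLayerModel` implementation in `mlp.models` may differ in its details.\n",
"\n",
"```python\n",
"# Illustrative sketch only: forward propagation through a stack of layers.\n",
"def fprop_through_layers(layers, inputs):\n",
"    activations = [inputs]\n",
"    for layer in layers:\n",
"        # the outputs of one layer are the inputs to the next\n",
"        activations.append(layer.fprop(activations[-1]))\n",
"    return activations  # activations[-1] are the model outputs\n",
"```\n",
"\n",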
"## Back-propagation of gradients\n",
" \n",
"To allow training models consisting of a stack of multiple layers, all layers need to implement a `bprop` method in addition to the `fprop` we encountered in the previous week. \n",
"\n",
"The `bprop` method takes gradients of an error function with respect to the *outputs* of a layer and uses these gradients to calculate gradients of the error function with respect to the *inputs* of a layer. As the inputs to a hidden layer in a multiple-layer model consist of the outputs of the previous layer, this means we can calculate the gradients of the error function with respect to the outputs of every layer in the model by iteratively propagating the gradients backwards through the layers of the model (i.e. from the last to first layer), hence the term 'back-propagation' or 'bprop' for short. A block diagram illustrating this is shown for a three layer model below.\n",
"\n",
"<img src='res/fprop-bprop-block-diagram.png' />\n",
"\n",
"For a layer with parameters, the gradients with respect to the layer outputs are required to calculate gradients with respect to the layer parameters. Therefore by combining backward propagation of gradients through the model with computing the gradients with respect to parameters in the relevant layers, we can calculate gradients of the error function with respect to all of the parameters of a multiple-layer model in a very efficient manner. In fact the computational cost of computing gradients with respect to all of the parameters of the model using this method will only be a constant factor times the cost of calculating the model outputs in the forwards pass.\n",
"\n",
"So far, we have abstractly talked about calculating gradients with respect to the inputs of a layer using gradients with respect to the layer outputs. More concretely we will be using the chain rule for derivatives to do this, similarly to how we used the chain rule in exercise 4 of the previous notebook to calculate gradients with respect to the parameters of an affine layer given gradients with respect to the outputs of the layer.\n",
"\n",
"In particular if our layer has a batch of $B$ vector inputs each of dimension $D$, $\\left\\lbrace \\boldsymbol{x}^{(b)} \\right\\rbrace_{b=1}^B$, and produces a batch of $B$ vector outputs each of dimension $K$, $\\left\\lbrace \\boldsymbol{y}^{(b)}\\right\\rbrace_{b=1}^B$, then we can calculate the gradient with respect to the $d^\\textrm{th}$ dimension of the $b^{\\textrm{th}}$ input using the gradients with respect to the $b^{\\textrm{th}}$ output\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial \\bar{E}}{\\partial x^{(b)}_d} = \n",
" \\sum_{k=1}^K \\left( \n",
" \\frac{\\partial \\bar{E}}{\\partial y^{(b)}_k} \\frac{\\partial y^{(b)}_k}{\\partial x^{(b)}_d} \n",
" \\right).\n",
"\\end{equation}\n",
"\n",
"The `bprop` method takes an array of gradients with respect to the outputs $\\frac{\\partial \\bar{E}}{\\partial y^{(b)}_k}$ and applies a sum-product operation with the partial derivatives of each output with respect to each input $\\frac{\\partial y^{(b)}_k}{\\partial x^{(b)}_d}$, producing gradients with respect to the inputs of the layer $\\frac{\\partial \\bar{E}}{\\partial x^{(b)}_d}$.\n",
"\n",
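"As a concrete (purely illustrative) example of this sum-product, the snippet below applies it directly to randomly generated values by explicitly storing the full Jacobian of outputs with respect to inputs for each example in a batch. The layer implementations used in the labs exploit the structure of each transformation rather than forming full Jacobians.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"B, D, K = 2, 3, 4\n",
"grads_wrt_outputs = np.random.randn(B, K)  # dE/dy for each example, shape (B, K)\n",
"jacobians = np.random.randn(B, K, D)       # dy/dx for each example, shape (B, K, D)\n",
"# sum over k of (dE/dy_k * dy_k/dx_d) for each example b and input dimension d\n",
"grads_wrt_inputs = np.einsum('bk,bkd->bd', grads_wrt_outputs, jacobians)\n",
"assert grads_wrt_inputs.shape == (B, D)\n",
"```\n",
"\n",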
"For the affine transformation used in the `AffineLayer` implemented in lab 2, i.e. a forward propagation corresponding to \n",
"\n",
"\\begin{equation}\n",
" y^{(b)}_k = \\sum_{d=1}^D \\left( W_{kd} x^{(b)}_d \\right) + b_k\n",
"\\end{equation}\n",
"\n",
"then the corresponding partial derivatives of layer outputs with respect to inputs are\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial y^{(b)}_k}{\\partial x^{(b)}_d} = W_{kd}\n",
"\\end{equation}\n",
"\n",
"and so the backwards-propagation method for the `AffineLayer` takes the following form\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial \\bar{E}}{\\partial x^{(b)}_d} = \n",
" \\sum_{k=1}^K \\left( \\frac{\\partial \\bar{E}}{\\partial y^{(b)}_k} W_{kd} \\right).\n",
"\\end{equation}\n",
"\n",
"This can be efficiently implemented in NumPy using the `dot` function\n",
"\n",
"```python\n",
"class AffineLayer(LayerWithParameters):\n",
"\n",
" # ... [implementation of remaining methods from previous week] ...\n",
" \n",
" def bprop(self, inputs, outputs, grads_wrt_outputs):\n",
" return grads_wrt_outputs.dot(self.weights)\n",
"```\n",
"\n",
"An important special case applies when the outputs of a layer are an elementwise function of the inputs such that $y^{(b)}_k$ only depends on $x^{(b)}_d$ when $d = k$. In this case the partial derivatives $\\frac{\\partial y^{(b)}_k}{\\partial x^{(b)}_d}$ will be zero when $k \\neq d$ and the above summation reduces to a single term,\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial \\bar{E}}{\\partial x^{(b)}_d} = \n",
" \\frac{\\partial \\bar{E}}{\\partial y^{(b)}_d} \\frac{\\partial y^{(b)}_d}{\\partial x^{(b)}_d}\n",
"\\end{equation}\n",
"\n",
"i.e. to calculate the gradient with respect to the $b^{\\textrm{th}}$ input vector we just perform an elementwise multiplication of the gradient with respect to the $b^{\\textrm{th}}$ output vector with the vector of derivatives of the outputs with respect to the inputs. This case applies to the `SigmoidLayer` and to all other layers applying an elementwise function to their inputs.\n",
"\n",
"For the logistic sigmoid layer we have that\n",
"\n",
"\\begin{equation}\n",
" y^{(b)}_d = \\frac{1}{1 + \\exp(-x^{(b)}_d)}\n",
" \\qquad\n",
" \\Rightarrow\n",
" \\qquad\n",
" \\frac{\\partial y^{(b)}_d}{\\partial x^{(b)}_d} = \n",
" \\frac{\\exp(-x^{(b)}_d)}{\\left[ 1 + \\exp(-x^{(b)}_d) \\right]^2} =\n",
" y^{(b)}_d \\left[ 1 - y^{(b)}_d \\right]\n",
"\\end{equation}\n",
"\n",
"which you should now be able to relate to the implementation of `SigmoidLayer.bprop` given earlier:\n",
"\n",
"```python\n",
"class SigmoidLayer(Layer):\n",
"\n",
" def fprop(self, inputs):\n",
" return 1. / (1. + np.exp(-inputs))\n",
"\n",
" def bprop(self, inputs, outputs, grads_wrt_outputs):\n",
" return grads_wrt_outputs * outputs * (1. - outputs)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 1: training a softmax model on MNIST\n",
"\n",
"For this first exercise we will train a model consisting of an affine transformation plus softmax on a multiclass classification task: classifying the digit labels for handwritten digit images from the MNIST data set introduced in the first notebook.\n",
"\n",
"First run the cell below to import the necessary modules and classes and to load the MNIST data provider objects. As it takes a little while to load the MNIST data from disk into memory it is worth loading the data providers just once in a separate cell like this rather than recreating the objects for every training run.\n",
"\n",
"We are loading two data provider objects here - one corresponding to the training data set and a second to use as a *validation* data set. This is data we do not train the model on, but on which we measure the performance of the trained model to assess its ability to *generalise* to unseen data.\n",
"\n",
"The concept of training, validation, and test data sets was introduced in lecture one, and the concept of generalisation is discussed in more detail in lecture five. As you will need to report both training and validation set performances in your experiments for the first coursework assignment we are providing code here to give an example of how to do this."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import logging\n",
"from mlp.layers import AffineLayer, SoftmaxLayer, SigmoidLayer\n",
"from mlp.errors import CrossEntropyError, CrossEntropySoftmaxError\n",
"from mlp.models import SingleLayerModel, MultipleLayerModel\n",
"from mlp.initialisers import UniformInit\n",
"from mlp.learning_rules import GradientDescentLearningRule\n",
"from mlp.data_providers import MNISTDataProvider\n",
"from mlp.optimisers import Optimiser\n",
"\n",
"plt.style.use('ggplot')\n",
"\n",
"# Seed a random number generator\n",
"seed = 6102016 \n",
"rng = np.random.RandomState(seed)\n",
"\n",
"# Set up a logger object to print info about the training run to stdout\n",
"logger = logging.getLogger()\n",
"logger.setLevel(logging.INFO)\n",
"logger.handlers = [logging.StreamHandler()]\n",
"\n",
"# Create data provider objects for the MNIST data set\n",
"train_data = MNISTDataProvider('train', rng=rng)\n",
"valid_data = MNISTDataProvider('valid', rng=rng)\n",
"input_dim, output_dim = 784, 10"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To minimise replication of code and allow you to run experiments more quickly, a helper function is provided below which trains a model and plots the evolution of the error and classification accuracy of the model (on both training and validation sets) over training."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"def train_model_and_plot_stats(\n",
" model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval):\n",
"\n",
" # As well as monitoring the error over training also monitor classification\n",
" # accuracy i.e. proportion of most-probable predicted classes being equal to targets\n",
" data_monitors={'acc': lambda y, t: (y.argmax(-1) == t.argmax(-1)).mean()}\n",
"\n",
" # Use the created objects to initialise a new Optimiser instance.\n",
" optimiser = Optimiser(\n",
" model, error, learning_rule, train_data, valid_data, data_monitors)\n",
"\n",
" # Run the optimiser for num_epochs epochs (full passes through the training set),\n",
" # recording and printing statistics every stats_interval epochs.\n",
" stats, keys, run_time = optimiser.train(num_epochs=num_epochs, stats_interval=stats_interval)\n",
"\n",
" # Plot the change in the validation and training set error over training.\n",
" fig_1 = plt.figure(figsize=(8, 4))\n",
" ax_1 = fig_1.add_subplot(111)\n",
" for k in ['error(train)', 'error(valid)']:\n",
" ax_1.plot(np.arange(1, stats.shape[0]) * stats_interval, \n",
" stats[1:, keys[k]], label=k)\n",
" ax_1.legend(loc=0)\n",
" ax_1.set_xlabel('Epoch number')\n",
"\n",
" # Plot the change in the validation and training set accuracy over training.\n",
" fig_2 = plt.figure(figsize=(8, 4))\n",
" ax_2 = fig_2.add_subplot(111)\n",
" for k in ['acc(train)', 'acc(valid)']:\n",
" ax_2.plot(np.arange(1, stats.shape[0]) * stats_interval, \n",
" stats[1:, keys[k]], label=k)\n",
" ax_2.legend(loc=0)\n",
" ax_2.set_xlabel('Epoch number')\n",
" \n",
" return stats, keys, run_time, fig_1, ax_1, fig_2, ax_2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Running the cell below will create a model consisting of an affine layer followed by a softmax transformation and train it on the MNIST data set by minimising the multi-class cross entropy error function using a basic gradient descent learning rule. Using the helper function defined above, the evolution of the error function and the classification accuracy of the model over the training epochs will be plotted at the end of training.\n",
"\n",
"**Your Tasks:**\n",
"- Try running the code for various settings of the training hyperparameters defined at the beginning of the cell to get a feel for how these affect how training proceeds. You may wish to create multiple copies of the cell below to allow you to keep track of and compare the results across different hyperparameter settings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set training run hyperparameters\n",
"batch_size = 100 # number of data points in a batch\n",
"init_scale = 0.01 # scale for random parameter initialisation\n",
"learning_rate = 0.1 # learning rate for gradient descent\n",
"num_epochs = 100 # number of training epochs to perform\n",
"stats_interval = 5 # epoch interval between recording and printing stats\n",
"\n",
"# Reset random number generator and data provider states on each run\n",
"# to ensure reproducibility of results\n",
"rng.seed(seed)\n",
"train_data.reset()\n",
"valid_data.reset()\n",
"\n",
"# Alter data-provider batch size\n",
"train_data.batch_size = batch_size \n",
"valid_data.batch_size = batch_size\n",
"\n",
"# Create a parameter initialiser which will sample random uniform values\n",
"# from [-init_scale, init_scale]\n",
"param_init = UniformInit(-init_scale, init_scale, rng=rng)\n",
"\n",
"# Create affine + softmax model\n",
"model = MultipleLayerModel([\n",
" AffineLayer(input_dim, output_dim, param_init, param_init),\n",
" SoftmaxLayer()\n",
"])\n",
"\n",
"# Initialise a cross entropy error object\n",
"error = CrossEntropyError()\n",
"\n",
"# Use a basic gradient descent learning rule\n",
"learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)\n",
"\n",
"_ = train_model_and_plot_stats(\n",
" model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Optional extra: more efficient softmax gradient evaluation\n",
"\n",
"In the lectures you were shown that for certain combinations of error function and final output layer, the expressions for the gradients take particularly simple forms. \n",
"\n",
"In particular it can be shown that the combinations of \n",
"\n",
" * logistic sigmoid output layer and binary cross entropy error function\n",
" * softmax output layer and cross entropy error function\n",
" \n",
"lead to particularly simple forms for the gradients of the error function with respect to the inputs to the final layer. In particular for the latter softmax and cross entropy error function case we have that\n",
"\n",
"\\begin{equation}\n",
" y^{(b)}_k = \\textrm{Softmax}_k\\lpa\\vct{x}^{(b)}\\rpa = \\frac{\\exp(x^{(b)}_k)}{\\sum_{d=1}^D \\lbr \\exp(x^{(b)}_d) \\rbr}\n",
" \\qquad\n",
" E^{(b)} = \\textrm{CrossEntropy}\\lpa\\vct{y}^{(b)},\\,\\vct{t}^{(b)}\\rpa = -\\sum_{d=1}^D \\lbr t^{(b)}_d \\log(y^{(b)}_d) \\rbr\n",
"\\end{equation}\n",
"\n",
"and it can be shown (this is an instructive mathematical exercise if you want a challenge!) that\n",
"\n",
"\\begin{equation}\n",
" \\pd{E^{(b)}}{x^{(b)}_d} = y^{(b)}_d - t^{(b)}_d.\n",
"\\end{equation}\n",
"\n",
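"If you want to verify this identity numerically rather than derive it, a quick finite-difference check on a single example (purely illustrative, not part of the `mlp` framework) could look like this:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng_check = np.random.RandomState(0)\n",
"x = rng_check.randn(5)                    # inputs to the softmax (logits)\n",
"t = np.eye(5)[2]                          # one-hot target\n",
"softmax = lambda u: np.exp(u - u.max()) / np.exp(u - u.max()).sum()\n",
"error = lambda u: -np.sum(t * np.log(softmax(u)))\n",
"eps = 1e-6\n",
"num_grad = np.array([\n",
"    (error(x + eps * np.eye(5)[i]) - error(x - eps * np.eye(5)[i])) / (2 * eps)\n",
"    for i in range(5)])\n",
"# central differences agree with the analytic gradient y - t\n",
"assert np.allclose(num_grad, softmax(x) - t, atol=1e-5)\n",
"```\n",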
"\n",
"The combination of `CrossEntropyError` and `SoftmaxLayer` used to train the model above calculates this gradient less directly by first calculating the gradient of the error with respect to the model outputs in `CrossEntropyError.grad` and then back-propagating this gradient to the inputs of the softmax layer using `SoftmaxLayer.bprop`.\n",
"\n",
"Rather than computing the gradient in two steps like this we can instead wrap the softmax transformation into the definition of the error function and make use of the simpler gradient expression above. More explicitly we define an error function as follows\n",
"\n",
"\\begin{equation}\n",
" E^{(b)} = \\textrm{CrossEntropySoftmax}\\lpa\\vct{y}^{(b)},\\,\\vct{t}^{(b)}\\rpa = -\\sum_{d=1}^D \\lbr t^{(b)}_d \\log\\lsb\\textrm{Softmax}_d\\lpa \\vct{y}^{(b)}\\rpa\\rsb\\rbr\n",
"\\end{equation}\n",
"\n",
"with corresponding gradient\n",
"\n",
"\\begin{equation}\n",
" \\pd{E^{(b)}}{y^{(b)}_d} = \\textrm{Softmax}_d\\lpa \\vct{y}^{(b)}\\rpa - t^{(b)}_d.\n",
"\\end{equation}\n",
"\n",
"The final layer of the model will then be an affine transformation which produces unbounded output values corresponding to the logarithms of the unnormalised predicted class probabilities. An implementation of this error function is provided in `CrossEntropySoftmaxError`. The cell below sets up a model with a single affine transformation layer and trains it on MNIST using this new cost. If you run it with equivalent hyperparameters to one of your runs with the alternative formulation above you should get identical error and classification curves (other than floating point error) but with a minor improvement in training speed.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set training run hyperparameters\n",
"batch_size = 100 # number of data points in a batch\n",
"init_scale = 0.1 # scale for random parameter initialisation\n",
"learning_rate = 0.1 # learning rate for gradient descent\n",
"num_epochs = 100 # number of training epochs to perform\n",
"stats_interval = 5 # epoch interval between recording and printing stats\n",
"\n",
"# Reset random number generator and data provider states on each run\n",
"# to ensure reproducibility of results\n",
"rng.seed(seed)\n",
"train_data.reset()\n",
"valid_data.reset()\n",
"\n",
"# Alter data-provider batch size\n",
"train_data.batch_size = batch_size \n",
"valid_data.batch_size = batch_size\n",
"\n",
"# Create a parameter initialiser which will sample random uniform values\n",
"# from [-init_scale, init_scale]\n",
"param_init = UniformInit(-init_scale, init_scale, rng=rng)\n",
"\n",
"# Create affine model (outputs are logs of unnormalised class probabilities)\n",
"model = SingleLayerModel(\n",
" AffineLayer(input_dim, output_dim, param_init, param_init)\n",
")\n",
"\n",
"# Initialise the error object\n",
"error = CrossEntropySoftmaxError()\n",
"\n",
"# Use a basic gradient descent learning rule\n",
"learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)\n",
"\n",
"_ = train_model_and_plot_stats(\n",
" model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 2: training deeper models on MNIST\n",
"\n",
"We are now going to investigate using deeper multiple-layer model architectures for the MNIST classification task. You should experiment with training models with two to five `AffineLayer` transformations interleaved with `SigmoidLayer` nonlinear transformations. Intermediate hidden layers between the input and output should have a dimension of 100. For example the `layers` definition of a model with two `AffineLayer` transformations would be\n",
"\n",
"```python\n",
"layers = [\n",
" AffineLayer(input_dim, 100),\n",
" SigmoidLayer(),\n",
" AffineLayer(100, output_dim),\n",
" SoftmaxLayer()\n",
"]\n",
"```\n",
"\n",
"If you read through the optional extension to the first exercise, you may wish to use `CrossEntropySoftmaxError` without the final `SoftmaxLayer`.\n",
"\n",
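"For example, a sketch of the two-affine-layer model above adapted to use `CrossEntropySoftmaxError` (dropping the final `SoftmaxLayer`) might look like this:\n",
"\n",
"```python\n",
"layers = [\n",
"    AffineLayer(input_dim, 100),\n",
"    SigmoidLayer(),\n",
"    AffineLayer(100, output_dim)\n",
"]\n",
"error = CrossEntropySoftmaxError()\n",
"```\n",
"\n",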
"**Your Tasks:**\n",
"- Use the code from the first exercise as a starting point to train models of varying depths, and compare their results. It is a good idea to start with training hyperparameters which gave reasonable performance for the shallow architecture trained previously.\n",
"\n",
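"As a starting point for building models of varying depth, the sketch below shows one way to construct the `layers` list programmatically. The helper name and defaults are illustrative only, and it assumes `input_dim`, `output_dim` and a `param_init` initialiser set up as in the first exercise.\n",
"\n",
"```python\n",
"# Sketch: build a model with num_affine_layers affine layers and a hidden\n",
"# dimension of 100, with sigmoid non-linearities and a final softmax.\n",
"def make_sigmoid_model(num_affine_layers, hidden_dim=100):\n",
"    layers = [AffineLayer(input_dim, hidden_dim, param_init, param_init),\n",
"              SigmoidLayer()]\n",
"    for _ in range(num_affine_layers - 2):\n",
"        layers += [AffineLayer(hidden_dim, hidden_dim, param_init, param_init),\n",
"                   SigmoidLayer()]\n",
"    layers += [AffineLayer(hidden_dim, output_dim, param_init, param_init),\n",
"               SoftmaxLayer()]\n",
"    return MultipleLayerModel(layers)\n",
"```\n",
"\n",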
"Some questions to investigate:\n",
"\n",
" 1. How does increasing the number of layers affect the model's performance on the training data set? And on the validation data set?\n",
" 2. Do deeper models seem to be harder or easier to train (e.g. in terms of ease of choosing training hyperparameters to give good final performance and/or quick convergence)?\n",
" 3. Do the models seem to be sensitive to the choice of the parameter initialisation range? Can you think of any reasons for why setting individual parameter initialisation scales for each `AffineLayer` in a model might be useful? Can you come up with (or find) any heuristics for setting the parameter initialisation scales?\n",
" \n",
"You do not need to come up with explanations for all of these (though if you can that's great!); they are meant as prompts to get you thinking about the various issues involved in training multiple-layer models. \n",
"\n",
"You may wish to start with shorter pilot training runs (by decreasing the number of training epochs) for each of the model architectures to get an initial idea of appropriate hyperparameter settings before doing one or two longer training runs to assess the final performance of the architectures."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# disable logging by setting handler to dummy object\n",
"logger.handlers = [logging.NullHandler()]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Models with two affine layers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Models with three affine layers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Models with four affine layers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Models with five affine layers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 3: Hyperbolic tangent and rectified linear layers\n",
"\n",
"In the models we have been investigating so far, we have been applying elementwise logistic sigmoid transformations to the outputs of intermediate (affine) layers. The logistic sigmoid is just one particular choice of an elementwise non-linearity we can use. \n",
"\n",
"As discussed in lecture 3, although logistic sigmoid has some favourable properties in terms of interpretability, there are also disadvantages from a computational perspective. In particular:\n",
"1. the gradients of the sigmoid become close to zero (and may actually become zero because of finite numerical precision) for large positive or negative inputs, \n",
"2. the outputs are non-centred - they cover the interval $[0,\\,1]$ so negative outputs are never produced.\n",
"\n",
"Two alternative elementwise non-linearities which are often used in multiple layer models are the hyperbolic tangent (tanh) and the rectified linear function (ReLU).\n",
"\n",
"For a tanh (`TanhLayer`) layer the forward propagation corresponds to\n",
"\n",
"\\begin{equation}\n",
" y^{(b)}_k = \n",
" \\tanh\\left(x^{(b)}_k\\right) = \n",
" \\frac{\\exp\\left(x^{(b)}_k\\right) - \\exp\\left(-x^{(b)}_k\\right)}{\\exp\\left(x^{(b)}_k\\right) + \\exp\\left(-x^{(b)}_k\\right)}\n",
"\\end{equation}\n",
"\n",
"which has corresponding partial derivatives\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial y^{(b)}_k}{\\partial x^{(b)}_d} = \n",
" \\begin{cases} \n",
" 1 - \\left(y^{(b)}_k\\right)^2 & \\quad k = d \\\\\n",
" 0 & \\quad k \\neq d\n",
" \\end{cases}.\n",
"\\end{equation}\n",
"\n",
"For a ReLU (`ReluLayer`) the forward propagation corresponds to\n",
"\n",
"\\begin{equation}\n",
" y^{(b)}_k = \n",
" \\max\\left(0,\\,x^{(b)}_k\\right)\n",
"\\end{equation}\n",
"\n",
"which has corresponding partial derivatives\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial y^{(b)}_k}{\\partial x^{(b)}_d} = \n",
" \\begin{cases} \n",
" 1 & \\quad k = d \\quad\\textrm{and}\\quad x^{(b)}_d > 0 \\\\\n",
" 0 & \\quad k \\neq d \\quad\\textrm{or}\\quad x^{(b)}_d < 0\n",
" \\end{cases}.\n",
"\\end{equation}\n",
"\n",
"**Your Tasks:**\n",
"- Using these definitions implement the `fprop` and `bprop` methods for the skeleton `TanhLayer` and `ReluLayer` class definitions below."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from mlp.layers import Layer\n",
"\n",
"class TanhLayer(Layer):\n",
" \"\"\"Layer implementing an element-wise hyperbolic tangent transformation.\"\"\"\n",
"\n",
" def fprop(self, inputs):\n",
" \"\"\"Forward propagates activations through the layer transformation.\n",
"\n",
" For inputs `x` and outputs `y` this corresponds to `y = tanh(x)`.\n",
" \"\"\"\n",
" raise NotImplementedError(\"TODO Implement this function\")\n",
"\n",
" def bprop(self, inputs, outputs, grads_wrt_outputs):\n",
" \"\"\"Back propagates gradients through a layer.\n",
"\n",
" Given gradients with respect to the outputs of the layer calculates the\n",
" gradients with respect to the layer inputs.\n",
" \"\"\"\n",
" raise NotImplementedError(\"TODO Implement this function\")\n",
"\n",
" def __repr__(self):\n",
" return 'TanhLayer'\n",
" \n",
"\n",
"class ReluLayer(Layer):\n",
" \"\"\"Layer implementing an element-wise rectified linear transformation.\"\"\"\n",
"\n",
" def fprop(self, inputs):\n",
" \"\"\"Forward propagates activations through the layer transformation.\n",
"\n",
" For inputs `x` and outputs `y` this corresponds to `y = max(0, x)`.\n",
" \"\"\"\n",
" raise NotImplementedError(\"TODO Implement this function\")\n",
"\n",
" def bprop(self, inputs, outputs, grads_wrt_outputs):\n",
" \"\"\"Back propagates gradients through a layer.\n",
"\n",
" Given gradients with respect to the outputs of the layer calculates the\n",
" gradients with respect to the layer inputs.\n",
" \"\"\"\n",
" raise NotImplementedError(\"TODO Implement this function\")\n",
"\n",
" def __repr__(self):\n",
" return 'ReluLayer'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Test your implementations by running the cells below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_inputs = np.array([[0.1, -0.2, 0.3], [-0.4, 0.5, -0.6]])\n",
"test_grads_wrt_outputs = np.array([[5., 10., -10.], [-5., 0., 10.]])\n",
"test_tanh_outputs = np.array(\n",
" [[ 0.09966799, -0.19737532, 0.29131261],\n",
" [-0.37994896, 0.46211716, -0.53704957]])\n",
"test_tanh_grads_wrt_inputs = np.array(\n",
" [[ 4.95033145, 9.61042983, -9.15136962],\n",
" [-4.27819393, 0., 7.11577763]])\n",
"tanh_layer = TanhLayer()\n",
"tanh_outputs = tanh_layer.fprop(test_inputs)\n",
"all_correct = True\n",
"if not tanh_outputs.shape == test_tanh_outputs.shape:\n",
" print('TanhLayer.fprop returned array with wrong shape.')\n",
" all_correct = False\n",
"elif not np.allclose(test_tanh_outputs, tanh_outputs):\n",
" print('TanhLayer.fprop calculated incorrect outputs.')\n",
" all_correct = False\n",
"tanh_grads_wrt_inputs = tanh_layer.bprop(\n",
" test_inputs, tanh_outputs, test_grads_wrt_outputs)\n",
"if not tanh_grads_wrt_inputs.shape == test_tanh_grads_wrt_inputs.shape:\n",
" print('TanhLayer.bprop returned array with wrong shape.')\n",
" all_correct = False\n",
"elif not np.allclose(tanh_grads_wrt_inputs, test_tanh_grads_wrt_inputs):\n",
" print('TanhLayer.bprop calculated incorrect gradients with respect to inputs.')\n",
" all_correct = False\n",
"if all_correct:\n",
" print('Outputs and gradients calculated correctly for TanhLayer.')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_inputs = np.array([[0.1, -0.2, 0.3], [-0.4, 0.5, -0.6]])\n",
"test_grads_wrt_outputs = np.array([[5., 10., -10.], [-5., 0., 10.]])\n",
"test_relu_outputs = np.array([[0.1, 0., 0.3], [0., 0.5, 0.]])\n",
"test_relu_grads_wrt_inputs = np.array([[5., 0., -10.], [-0., 0., 0.]])\n",
"relu_layer = ReluLayer()\n",
"relu_outputs = relu_layer.fprop(test_inputs)\n",
"all_correct = True\n",
"if not relu_outputs.shape == test_relu_outputs.shape:\n",
" print('ReluLayer.fprop returned array with wrong shape.')\n",
" all_correct = False\n",
"elif not np.allclose(test_relu_outputs, relu_outputs):\n",
" print('ReluLayer.fprop calculated incorrect outputs.')\n",
" all_correct = False\n",
"relu_grads_wrt_inputs = relu_layer.bprop(\n",
" test_inputs, relu_outputs, test_grads_wrt_outputs)\n",
"if not relu_grads_wrt_inputs.shape == test_relu_grads_wrt_inputs.shape:\n",
" print('ReluLayer.bprop returned array with wrong shape.')\n",
" all_correct = False\n",
"elif not np.allclose(relu_grads_wrt_inputs, test_relu_grads_wrt_inputs):\n",
" print('ReluLayer.bprop calculated incorrect gradients with respect to inputs.')\n",
" all_correct = False\n",
"if all_correct:\n",
" print('Outputs and gradients calculated correctly for ReluLayer.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PyTorch\n",
"\n",
"In this section we will build on what we learned in the previous lab and use PyTorch to build a multi-layer model for the MNIST classification task."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"import torch.optim as optim\n",
"from torchvision import datasets,transforms\n",
"from torch.utils.data.sampler import SubsetRandomSampler\n",
"\n",
"torch.manual_seed(seed)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Neural networks typically take a long time to converge. This process can be sped up by using a GPU. If you have a GPU available, you can use it by setting the `device` variable below to `cuda`. If you do not have a GPU available, you can still run the code on the CPU by setting `device` to `cpu`.\n",
"\n",
"When training, both the model and the data should be on the same device. The `to` method can be used to move a tensor to a device. For example, `x = x.to(device)` will move the tensor `x` to the device specified by `device`. Look through the code to see where we put the model and the data on the CPU or GPU device."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Device configuration\n",
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
"device"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"# Set training run hyperparameters\n",
"batch_size = 128 # number of data points in a batch\n",
"learning_rate = 0.001 # learning rate for gradient descent\n",
"num_epochs = 50 # number of training epochs to perform\n",
"stats_interval = 5 # epoch interval between recording and printing stats"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [transforms](https://pytorch.org/vision/0.9/transforms.html) are common transformations for image datasets. We will use the `Compose` transform to combine the `ToTensor` and `Normalize` transforms. The `ToTensor` transform converts the image to a tensor and the `Normalize` transform normalizes the image by subtracting the mean and dividing by the standard deviation. The `Normalize` transform takes two arguments: the mean and the standard deviation, calculated for each channel. The mean and standard deviation for the MNIST dataset are $0.1307$ and $0.3081$ respectively. The `Normalize` transform is particularly useful when there is a big discrepancy between the values of the pixels in an image.\n",
"\n",
"When working with images, transforms are used to augment the dataset (i.e. create artificial images based on existing ones). This is done to increase the size of the dataset and to make the model more robust to changes in the input images. An illustration of how transforms affect an image is shown [here](https://pytorch.org/vision/0.11/auto_examples/plot_transforms.html#sphx-glr-download-auto-examples-plot-transforms-py)."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"transform=transforms.Compose([\n",
" transforms.ToTensor(),\n",
" transforms.Normalize((0.1307,), (0.3081,))\n",
" ])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Popular machine learning datasets are available in the [`torchvision.datasets`](https://pytorch.org/vision/0.15/datasets.html) module. The `MNIST` dataset is available in the [`torchvision.datasets.MNIST`](https://pytorch.org/vision/0.15/generated/torchvision.datasets.MNIST.html) class. This way, we can download the dataset directly from the PyTorch library into the `data` folder of our repository. \n",
"\n",
"The `MNIST` class takes the following arguments:\n",
"- `root`: the path where the dataset will be stored\n",
"- `train`: if `True`, the training set is returned, otherwise the test set is returned\n",
"- `download`: if `True`, the dataset is downloaded from the internet and put in `root`. If the dataset is already downloaded, it is not downloaded again\n",
"- `transform`: the transform to be applied to the dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_dataset = datasets.MNIST('../data', train=True, download=True, transform=transform)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"test_dataset = datasets.MNIST('../data', train=False, download=True, transform=transform)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that the training set has $60,000$ images and the test set has $10,000$ images. Each image is a $28 \\times 28$ grayscale image. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Train dataset: \\n\", train_dataset)\n",
"print(train_dataset.data.size())\n",
"print(train_dataset.targets.size())\n",
"print(\"\\nTest dataset: \\n\", test_dataset)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.imshow(train_dataset.data[42], cmap='gray')\n",
"plt.title('%i' % train_dataset.targets[42])\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, since we want to evaluate the performance of our model during training, we need a validation set. We can create a validation set by splitting the training set into two parts. As a general rule, the validation set should be $10-20\\%$ of the training set. The `SubsetRandomSampler` class can be used to create a subset of the training set. The `SubsetRandomSampler` class takes a list of randomly shuffled indices as an argument and selects a subset of the training set based on these indices.\n",
"\n",
"*Why would we want to randomly shuffle the data when creating the separate training and validation set?*\n",
"\n",
"*We could just take the first 80% of data points and assign them to the training set and the last 20% of data points to the validation set. When and why would this be a bad practice?*\n",
"\n",
"*Why do we want to shuffle the training and validation sets but not the test set (see `shuffle=False`)?*"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"valid_size=0.2 # Leave 20% of training set as validation set\n",
"num_train = len(train_dataset)\n",
"indices = list(range(num_train))\n",
"split = int(np.floor(valid_size * num_train))\n",
"np.random.shuffle(indices) # Shuffle indices in-place\n",
"train_idx, valid_idx = indices[split:], indices[:split] # Split indices into training and validation sets\n",
"train_sampler = SubsetRandomSampler(train_idx)\n",
"valid_sampler = SubsetRandomSampler(valid_idx)\n",
"\n",
"# Create the dataloaders\n",
"train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, sampler=train_sampler, pin_memory=True)\n",
"valid_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, sampler=valid_sampler, pin_memory=True)\n",
"test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, pin_memory=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create a multi-layer model, we will use the [`nn.Sequential`](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html) container. The `nn.Sequential` container takes a list of layers as an argument and applies them sequentially. It is a convenient way to create a model with multiple layers, but it is not very flexible: for example, we cannot have skip connections in a model created using `nn.Sequential`.\n",
"\n",
"Since we are working with images, we will have to flatten the images before passing them to the model. We can do this using the [`nn.Flatten`](https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html) layer. The `nn.Flatten` layer takes a tensor of shape `(N, C, H, W)` and flattens it to a tensor of shape `(N, C*H*W)`.\n",
"\n",
"In between the affine layers, we will use the [`nn.ReLU`](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html) activation function, which applies the ReLU function elementwise to the input tensor. There are other activation functions available in PyTorch, such as [`nn.Tanh`](https://pytorch.org/docs/stable/generated/torch.nn.Tanh.html), [`nn.Sigmoid`](https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html), and [`nn.LeakyReLU`](https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html). `nn.LeakyReLU` is similar to `nn.ReLU`, but it allows a small gradient when the input is negative, which can help with training.\n",
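"\n",
"As a quick illustrative check of the flattening behaviour (assuming the imports above), we can pass a dummy batch of MNIST-shaped images through an `nn.Flatten` layer:\n",
"\n",
"```python\n",
"# nn.Flatten maps a batch of shape (N, 1, 28, 28) to shape (N, 784)\n",
"example_batch = torch.zeros(32, 1, 28, 28)\n",
"print(nn.Flatten()(example_batch).shape)  # torch.Size([32, 784])\n",
"```"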
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"class MultipleLayerModel(nn.Module):\n",
" \"\"\"Multiple layer model.\"\"\"\n",
" def __init__(self, input_dim, output_dim, hidden_dim):\n",
" super().__init__()\n",
" self.flatten = nn.Flatten()\n",
" self.linear_relu_stack = nn.Sequential(\n",
" nn.Linear(input_dim, hidden_dim),\n",
" nn.ReLU(),\n",
" nn.Linear(hidden_dim, hidden_dim),\n",
" nn.ReLU(),\n",
" nn.Linear(hidden_dim, output_dim),\n",
" )\n",
" \n",
" def forward(self, x):\n",
" x = self.flatten(x)\n",
" logits = self.linear_relu_stack(x)\n",
" return logits"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since each image has size $1 \\times 28 \\times 28$, the `nn.Flatten` layer outputs a vector of $1 \\times 28 \\times 28 = 784$ values per image. The first affine layer therefore has input dimension $784$ and output dimension $100$, the second affine layer maps $100$ to $100$, and the third affine layer maps $100$ to $10$. The final output dimension is $10$ because there are $10$ classes in the MNIST dataset. The last layer outputs a vector of logits $y = (y_1, \\dots, y_{K})^{\\top}$, where $y_k$ is the unnormalised score for class $k$; the softmax applied inside the loss function below converts these scores into class probabilities."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_dim = 1*28*28\n",
"output_dim = 10\n",
"hidden_dim = 100\n",
"\n",
"model = MultipleLayerModel(input_dim, output_dim, hidden_dim).to(device)\n",
"print(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we want to classify images by their labels, we will use the [`nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) loss function. `nn.CrossEntropyLoss` combines the softmax function and the cross entropy loss: it takes the logits (the outputs of the last affine layer, before any softmax is applied) as input and returns the loss. It is equivalent to applying the softmax function to the logits and then applying the cross entropy loss to the resulting class probabilities and the labels.\n",
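"\n",
"As an illustrative check (not needed for training, and assuming the imports above), applying `nn.CrossEntropyLoss` to raw logits gives the same value as applying a log-softmax followed by the negative log-likelihood loss:\n",
"\n",
"```python\n",
"logits = torch.randn(4, 10)              # dummy model outputs for 4 examples\n",
"targets = torch.randint(0, 10, (4,))     # dummy integer class labels\n",
"ce = nn.CrossEntropyLoss()(logits, targets)\n",
"nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)\n",
"assert torch.allclose(ce, nll)\n",
"```"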
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"loss = nn.CrossEntropyLoss()\n",
"optimizer = optim.Adam(model.parameters(), lr=learning_rate) # Adam optimiser"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, our training loop will combine a training and an evaluation pass per epoch. During the training pass, we propagate the training data through the model and calculate the loss. Then, we calculate the gradients of the loss with respect to the parameters of the model and update the parameters of the model. During the evaluation pass, we propagate the validation data through the model and calculate the loss and the accuracy. We do not calculate the gradients of the loss with respect to the parameters of the model and we do not update the parameters of the model. \n",
"\n",
"*What would happen if we did?*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Keep track of the loss values over training\n",
"train_loss = [] \n",
"valid_loss = []\n",
"\n",
"# Keep track of the accuracy values over training\n",
"train_acc = []\n",
"valid_acc = []\n",
"\n",
"for i in range(num_epochs+1):\n",
" # Training\n",
" model.train()\n",
" batch_loss = []\n",
" batch_acc = []\n",
" for batch_idx, (x, t) in enumerate(train_loader):\n",
" x = x.to(device)\n",
" t = t.to(device)\n",
" \n",
" # Forward pass\n",
" y = model(x)\n",
" E_value = loss(y, t)\n",
" \n",
" # Backward pass\n",
" optimizer.zero_grad()\n",
" E_value.backward()\n",
" optimizer.step()\n",
" \n",
" # Calculate accuracy\n",
" _, argmax = torch.max(y, 1)\n",
" acc = (t == argmax.squeeze()).float().mean()\n",
" \n",
" # Logging\n",
" batch_loss.append(E_value.item())\n",
" batch_acc.append(acc.item())\n",
" \n",
" train_loss.append(np.mean(batch_loss))\n",
" train_acc.append(np.mean(batch_acc))\n",
"\n",
" # Validation\n",
" model.eval()\n",
" batch_loss = []\n",
" batch_acc = []\n",
" for batch_idx, (x, t) in enumerate(valid_loader):\n",
" x = x.to(device)\n",
" t = t.to(device)\n",
" \n",
" # Forward pass\n",
" y = model(x)\n",
" E_value = loss(y, t)\n",
" \n",
" # Calculate accuracy\n",
" _, argmax = torch.max(y, 1)\n",
" acc = (t == argmax.squeeze()).float().mean()\n",
" \n",
" # Logging\n",
" batch_loss.append(E_value.item())\n",
" batch_acc.append(acc.item())\n",
" \n",
" valid_loss.append(np.mean(batch_loss))\n",
" valid_acc.append(np.mean(batch_acc))\n",
"\n",
" if i % stats_interval == 0:\n",
" print('Epoch: {} \\tError(train): {:.6f} \\tAccuracy(train): {:.6f} \\tError(valid): {:.6f} \\tAccuracy(valid): {:.6f}'.format(\n",
" i, train_loss[-1], train_acc[-1], valid_loss[-1], valid_acc[-1]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below we can see the evolution of our training and validation losses, as well as the respective accuracies. The training loss decreases and the training accuracy increases with each epoch. However, the validation loss starts increasing after $10$ epochs and the validation accuracy increases only up to a certain point. \n",
"\n",
"*What could be happening here?*\n",
"\n",
"*Is training for 50 epochs a sensible choice?* \n",
"\n",
"*What number of epochs would be a better choice and why?* "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot the change in the validation and training set error over training.\n",
"fig_1 = plt.figure(figsize=(8, 4))\n",
"ax_1 = fig_1.add_subplot(111)\n",
"ax_1.plot(train_loss, label='Error(train)')\n",
"ax_1.plot(valid_loss, label='Error(valid)')\n",
"ax_1.legend(loc=0)\n",
"ax_1.set_xlabel('Epoch number')\n",
"\n",
"# Plot the change in the validation and training set accuracy over training.\n",
"fig_2 = plt.figure(figsize=(8, 4))\n",
"ax_2 = fig_2.add_subplot(111)\n",
"ax_2.plot(train_acc, label='Accuracy(train)')\n",
"ax_2.plot(valid_acc, label='Accuracy(valid)')\n",
"ax_2.legend(loc=0)\n",
"ax_2.set_xlabel('Epoch number')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once our model is trained and we are satisfied with the results, we can test its performance on unseen data. We can do this by propagating the test data through the model and calculating the accuracy. We can see that the test accuracy is similar to the validation accuracy.\n",
"\n",
"*Although using a test set is not necessary for training, why is it important to have one?*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Testing\n",
"test_acc = []\n",
"model.eval()\n",
"for batch_idx, (x, t) in enumerate(test_loader):\n",
" x = x.to(device)\n",
" t = t.to(device)\n",
"\n",
" # Forward pass\n",
" y = model(x)\n",
" \n",
" # Calculate accuracy\n",
" _, argmax = torch.max(y, 1)\n",
" acc = (t == argmax.squeeze()).float().mean()\n",
" \n",
" test_acc.append(acc.item())\n",
"test_acc = np.mean(test_acc)\n",
"print('Accuracy(test): {:.6f}'.format(test_acc))"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}