{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Single layer models\n",
"\n",
"The objective of this lab is to implement a single-layer network model consisting solely of an affine transformation of the inputs. The relevant material for this is covered in slides 12-23 of the first lecture.\n",
"\n",
"We will first implement the forward propagation of inputs to the network to produce predicted outputs. We will then move on to considering how to use gradients of an error function evaluated on the outputs to compute the gradients with respect to the model parameters to allow us to perform an iterative gradient-descent training procedure. In the final exercise you will use an interactive visualisation to explore the role of some of the different hyperparameters of gradient-descent based training methods.\n",
"\n",
"#### A note on random number generators\n",
"\n",
"It is generally good practice (for machine learning applications, **not** for cryptography!) to seed a pseudo-random number generator once at the beginning of each experiment. This makes it easier to reproduce results, as the same random draws will be produced each time the experiment is run (e.g. the same random initialisations used for parameters). Therefore, whenever we need to generate random values during this course, we will create a seeded random number generator object, as we do in the cell below."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"seed = 27092016\n",
"rng = np.random.RandomState(seed)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 1: linear and affine transforms\n",
"\n",
"Any *linear transform* (also called a linear map) on a finite-dimensional vector space can be parametrised by a matrix. For example if we consider $\\boldsymbol{x} \\in \\mathbb{R}^{D}$ as the input space of a model with $D$ dimensional real-valued inputs, then a matrix $\\mathbf{W} \\in \\mathbb{R}^{K\\times D}$ can be used to define a prediction model consisting solely of a linear transform of the inputs\n",
"\n",
"\\begin{equation}\n",
" \\boldsymbol{y} = \\mathbf{W} \\boldsymbol{x}\n",
" \\qquad\n",
" \\Leftrightarrow\n",
" \\qquad\n",
" y_k = \\sum_{d=1}^D \\left( W_{kd} x_d \\right) \\quad \\forall k \\in \\left\\lbrace 1 \\dots K\\right\\rbrace\n",
"\\end{equation}\n",
"\n",
"with $\\boldsymbol{y} \\in \\mathbb{R}^K$ the $K$-dimensional real-valued output of the model. Geometrically we can think of a linear transform as some combination of rotation, scaling, reflection and shearing of the input.\n",
"\n",
"An *affine transform* consists of a linear transform plus an additional translation on the output space parameterised by a vector $\\boldsymbol{b} \\in \\mathbb{R}^K$. A model consisting of an affine transformation of the inputs can then be defined as\n",
"\n",
"\\begin{equation}\n",
" \\boldsymbol{y} = \\mathbf{W}\\boldsymbol{x} + \\boldsymbol{b}\n",
" \\qquad\n",
" \\Leftrightarrow\n",
" \\qquad\n",
" y_k = \\sum_{d=1}^D \\left( W_{kd} x_d \\right) + b_k \\quad \\forall k \\in \\left\\lbrace 1 \\dots K\\right\\rbrace\n",
"\\end{equation}\n",
"\n",
"In machine learning we will usually refer to the matrix $\\mathbf{W}$ as a *weight matrix* and the vector $\\boldsymbol{b}$ as a *bias vector*.\n",
"\n",
"Generally, rather than working with a single data vector $\\boldsymbol{x}$ we will work with batches of datapoints $\\left\\lbrace \\boldsymbol{x}^{(b)}\\right\\rbrace_{b=1}^B$. We could calculate the outputs for each input in the batch sequentially\n",
"\n",
"\\begin{align}\n",
" \\boldsymbol{y}^{(1)} &= \\mathbf{W}\\boldsymbol{x}^{(1)} + \\boldsymbol{b}\\\\\n",
" \\boldsymbol{y}^{(2)} &= \\mathbf{W}\\boldsymbol{x}^{(2)} + \\boldsymbol{b}\\\\\n",
" \\dots &\\\\\n",
" \\boldsymbol{y}^{(B)} &= \\mathbf{W}\\boldsymbol{x}^{(B)} + \\boldsymbol{b}\\\\\n",
"\\end{align}\n",
"\n",
"by looping over each input in the batch and calculating the corresponding output. However, loops in Python are slow (particularly compared to compiled, statically typed languages such as C), due at least in part to the large overhead of dynamically inferring variable types at runtime. We therefore want to avoid loops in which this overhead would be the dominant computational cost.\n",
"\n",
"For array-based numerical operations, one way of overcoming this bottleneck is to *vectorise* operations, that is, to compute them all at once. NumPy `ndarray` objects are typed arrays for which operations such as elementwise arithmetic and linear algebra (*e.g.* computing matrix-matrix or matrix-vector products) are implemented by calls to highly-optimised compiled libraries. Implementing code directly as NumPy operations on whole arrays, rather than looping over array elements, therefore usually leads to very substantial performance gains.\n",
"\n",
"As a simple example, we can consider adding up two arrays `a` and `b` and writing the result to a third array `c`. Let us start by initialising `a` and `b` with arbitrary values by running the cell below."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"size = 1000\n",
"a = rng.randn(size)\n",
"b = rng.randn(size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we are going to measure how long it takes to add each pair of values in the two arrays and write the results to a third array, using a loop-based implementation. We will use the `%%timeit` magic briefly mentioned in the previous lab notebook, specifying the number of times to loop the code as 100 and repeating the measurement 3 times for better consistency. Run the cell below to print the average time taken."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%timeit -n 100 -r 3\n",
"c = np.empty(size)\n",
"for i in range(size):\n",
"    c[i] = a[i] + b[i]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we will perform the corresponding summation with the overloaded addition operator of NumPy arrays. Again run the cell below to get a print out of the average time taken."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%timeit -n 100 -r 3\n",
"c = a + b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first loop-based implementation should have taken on the order of milliseconds ( $10^{-3}$ s) while the vectorised implementation should have taken on the order of microseconds ( $10^{-6}$ s), i.e. a $\\sim1000\\times$ speedup. Hopefully this simple example should make it clear why we want to vectorise operations whenever possible!\n",
"\n",
"Getting back to our affine model, ideally rather than individually computing the output corresponding to each input we should compute the outputs for all inputs in a batch using a vectorised implementation. As you saw last week, data providers return batches of inputs as arrays of shape `(batch_size, input_dim)`. In the mathematical notation used earlier we can consider the input as a matrix $\\mathbf{X}$ of dimensionality $B \\times D$:\n",
"\n",
"\\begin{equation}\n",
" \\mathbf{X} = \\left[ \\boldsymbol{x}^{(1)} ~ \\boldsymbol{x}^{(2)} ~ \\dots ~ \\boldsymbol{x}^{(B)} \\right]^\\mathrm{T}\n",
"\\end{equation}\n",
"\n",
"i.e. the $b^{\\textrm{th}}$ input vector $\\boldsymbol{x}^{(b)}$ corresponds to the $b^{\\textrm{th}}$ row of $\\mathbf{X}$. Similarly, we can define the $B \\times K$ matrix of outputs $\\mathbf{Y}$ as\n",
"\n",
"\\begin{equation}\n",
" \\mathbf{Y} = \\left[ \\boldsymbol{y}^{(1)} ~ \\boldsymbol{y}^{(2)} ~ \\dots ~ \\boldsymbol{y}^{(B)} \\right]^\\mathrm{T}\n",
"\\end{equation}\n",
"\n",
"We can then express the relationship between $\\mathbf{X}$ and $\\mathbf{Y}$ using [matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) and addition as\n",
"\n",
"\\begin{equation}\n",
" \\mathbf{Y} = \\mathbf{X} \\mathbf{W}^\\mathrm{T} + \\mathbf{B}\n",
"\\end{equation}\n",
"\n",
"where $\\mathbf{B} = \\left[ \\boldsymbol{b} ~ \\boldsymbol{b} ~ \\dots ~ \\boldsymbol{b} \\right]^\\mathrm{T}$ i.e. a $B \\times K$ matrix with each row corresponding to the same bias vector. The weight matrix needs to be transposed here as the inner dimensions of a matrix multiplication must match i.e. for $\\mathbf{C} = \\mathbf{A} \\mathbf{B}$ then if $\\mathbf{A}$ is of dimensionality $K \\times L$ and $\\mathbf{B}$ is of dimensionality $M \\times N$ then it must be the case that $L = M$ and $\\mathbf{C}$ will be of dimensionality $K \\times N$.\n",
"\n",
"**Your Tasks:**\n",
"\n",
"The first exercise for this lab is to implement *forward propagation* for a single-layer model consisting of an affine transformation of the inputs in the `fprop` function given as skeleton code in the cell below. This should work for a batch of inputs of shape `(batch_size, input_dim)` producing a batch of outputs of shape `(batch_size, output_dim)`.\n",
"\n",
"You will probably want to use the NumPy `dot` function and [broadcasting features](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to implement this efficiently. If you are not familiar with either of these, you may wish to read the [hints](#Hints:-Using-the-dot-function-and-broadcasting) section below which provides some tips before attempting the exercise."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"def fprop(inputs, weights, biases):\n",
"    \"\"\"Forward propagates activations through the layer transformation.\n",
"\n",
"    For inputs `x`, outputs `y`, weights `W` and biases `b` the layer\n",
"    corresponds to `y = W x + b`.\n",
"\n",
"    Args:\n",
"        inputs: Array of layer inputs of shape (batch_size, input_dim).\n",
"        weights: Array of weight parameters of shape\n",
"            (output_dim, input_dim).\n",
"        biases: Array of bias parameters of shape (output_dim,).\n",
"\n",
"    Returns:\n",
"        outputs: Array of layer outputs of shape (batch_size, output_dim).\n",
"    \"\"\"\n",
"    raise NotImplementedError(\"TODO: Implement this function.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have implemented `fprop` in the cell above you can test your implementation by running the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inputs = np.array([[0., -1., 2.], [-6., 3., 1.]])\n",
"weights = np.array([[2., -3., -1.], [-5., 7., 2.]])\n",
"biases = np.array([5., -3.])\n",
"true_outputs = np.array([[6., -6.], [-17., 50.]])\n",
"\n",
"if not np.allclose(fprop(inputs, weights, biases), true_outputs):\n",
"    print('Wrong outputs computed.')\n",
"else:\n",
"    print('All outputs correct!')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Hints: Using the `dot` function and broadcasting\n",
"\n",
"For those new to NumPy, below are some details on the `dot` function and the broadcasting feature that you may want to use when implementing the first exercise. If you are already familiar with these and have completed the first exercise, you can move straight on to the [second exercise](#Exercise-2:-visualising-random-models).\n",
"\n",
"#### `numpy.dot` function\n",
"\n",
"Matrix-matrix, matrix-vector and vector-vector (dot) products can all be computed in NumPy using the [`dot`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) function, which generalises all of these operations. For example, if `A` and `B` are both two-dimensional arrays, then `C = np.dot(A, B)` or equivalently `C = A.dot(B)` will compute the matrix product of `A` and `B`, assuming `A` and `B` have compatible dimensions. Similarly, if `a` and `b` are one-dimensional arrays then `c = np.dot(a, b)` (equivalently `c = a.dot(b)`) will compute the [scalar / dot product](https://en.wikipedia.org/wiki/Dot_product) of the two arrays. If `A` is a two-dimensional array and `b` a one-dimensional array, `np.dot(A, b)` (equivalently `A.dot(b)`) will compute the matrix-vector product. Examples of all three product types are shown in the cell below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Initialise arrays with arbitrary values\n",
"A = np.arange(9).reshape((3, 3))\n",
"B = np.ones((3, 3)) * 2\n",
"a = np.array([-1., 0., 1.])\n",
"b = np.array([0.1, 0.2, 0.3])\n",
"print(A.dot(B)) # Matrix-matrix product\n",
"print(B.dot(A)) # Reversed product of above. A.dot(B) != B.dot(A) in general\n",
"print(A.dot(b)) # Matrix-vector product\n",
"print(b.dot(A)) # Again A.dot(b) != b.dot(A) unless A is symmetric i.e. A == A.T\n",
"print(a.dot(b)) # Vector-vector scalar product"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Broadcasting\n",
"\n",
"Another NumPy feature it will be helpful to get familiar with is [broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). Broadcasting allows you to apply operations to arrays of different shapes, for example adding a one-dimensional array to a two-dimensional array or multiplying a multidimensional array by a scalar, by implicitly repeating the smaller array along the missing or size-one dimensions. The complete set of broadcasting rules as explained in the official documentation can sound a bit complex: you might find the [visual explanation on this page](http://www.scipy-lectures.org/intro/numpy/operations.html#broadcasting) more intuitive.\n",
"Keep in mind that broadcasting will silently produce erroneous results if the array shapes are not as intended, so it is worth checking that your arrays have the shapes you expect.\n",
"The cell below gives a few examples:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Initialise arrays with arbitrary values\n",
"A = np.arange(6).reshape((3, 2))\n",
"b = np.array([0.1, 0.2])\n",
"c = np.array([-1., 0., 1.])\n",
"print(A + b) # Add b elementwise to all rows of A\n",
"print((A.T + c).T) # Add c elementwise to all columns of A\n",
"print(A * b) # Multiply each row of A elementwise by b"
]
},
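{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the context of the first exercise, broadcasting is what lets a bias vector of shape `(output_dim,)` be added to every row of a `(batch_size, output_dim)` array in a single vectorised operation. The cell below is a minimal sketch, with arbitrary illustrative shapes:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Broadcasting a bias vector over a batch of outputs: the (output_dim,)\n",
"# array is treated as a row and added to every row of the 2D array.\n",
"Y = np.zeros((4, 3))  # stand-in for (batch_size, output_dim) outputs\n",
"bias = np.array([1., 2., 3.])  # bias vector of shape (output_dim,)\n",
"print(Y + bias)  # each of the 4 rows becomes [1. 2. 3.]\n",
"print((Y + bias).shape)  # (4, 3): the batch shape is preserved"
]
},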
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 2: visualising random models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this exercise you will use your `fprop` implementation to visualise the outputs of a single-layer affine transform model with two-dimensional inputs and a one-dimensional output. In this simple case, we can visualise the joint input-output space on a 3D axis.\n",
"\n",
"For this task and the learning experiments later in the notebook we will use a regression dataset from the UCI machine learning repository. In particular we will use a version of the [Combined Cycle Power Plant dataset](http://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant), where the task is to predict the energy output of a power plant given observations of the local ambient conditions (e.g. temperature, pressure and humidity).\n",
"\n",
"The original dataset has four input dimensions and a single target output dimension. We have preprocessed the dataset by [whitening](https://en.wikipedia.org/wiki/Whitening_transformation) it. Geometrically, this process rotates the data so that its [principal components](https://en.wikipedia.org/wiki/Principal_component_analysis) are aligned with the basis vectors, and then scales the data so that the variance along each dimension is one (see [here](https://www.quora.com/What-is-the-use-of-Whitening-images-as-a-preprocessing-step-for-a-Convolutional-Neural-Network)).\n",
"\n",
"If the original dataset has covariance matrix $\\mathbf{C}$, a whitening transformation $\\mathbf{D}$ is one which satisfies:\n",
"\\begin{equation}\n",
" \\mathbf{D}^{\\mathrm{T}} \\mathbf{C} \\mathbf{D} = \\mathbf{I},\n",
"\\end{equation}\n",
"where $\\mathbf{I}$ is the identity matrix.\n",
"\n",
"This can be considered a change of basis, where newly formed input features are decorrelated and have equivalent scale, which can lead to reduced learning times (see [here](https://proceedings.neurips.cc/paper/1990/file/758874998f5bd0c393da094e1967a72b-Paper.pdf)). We will only use the first two dimensions of the whitened inputs (corresponding to the first two principal components of the original inputs) so we can easily visualise the joint input-output space.\n",
"\n",
"The dataset has been wrapped in the `CCPPDataProvider` class in the `mlp.data_providers` module and the data included as a compressed file in the data directory as `ccpp_data.npz`. Running the cell below will initialise an instance of this class, get a single batch of inputs and targets, and import the necessary `matplotlib` objects."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"# import sys\n",
"# sys.path.append('/path/to/mlpractical')\n",
"from mpl_toolkits.mplot3d import Axes3D\n",
"from mlp.data_providers import CCPPDataProvider\n",
"\n",
"data_provider = CCPPDataProvider(\n",
"    which_set='train',\n",
"    input_dims=[0, 1],\n",
"    batch_size=5000,\n",
"    max_num_batches=1,\n",
"    shuffle_order=False\n",
")\n",
"\n",
"input_dim, output_dim = 2, 1\n",
"\n",
"inputs, targets = data_provider.next()"
]
},
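{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, the whitening transformation described above can be sketched directly with NumPy. The cell below is purely illustrative and is **not** the exact preprocessing used to produce `ccpp_data.npz`: it forms a whitening matrix $\\mathbf{D}$ from the eigendecomposition of the sample covariance of some correlated synthetic data, and checks that the whitened data has approximately identity covariance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative whitening sketch on synthetic data (not the exact\n",
"# preprocessing applied to the CCPP inputs).\n",
"X = rng.normal(size=(1000, 2)).dot(np.array([[2., 0.], [1.5, 0.5]]))\n",
"C = np.cov(X, rowvar=False)  # sample covariance of the raw data\n",
"eigvals, eigvecs = np.linalg.eigh(C)\n",
"D = eigvecs / np.sqrt(eigvals)  # scale eigenvectors so D.T C D = I\n",
"X_white = (X - X.mean(0)).dot(D)\n",
"print(np.cov(X_white, rowvar=False).round(2))  # close to the identity"
]
},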
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now run the cell below to plot the predicted outputs of a randomly initialised model across the two-dimensional input space, alongside the true target outputs. This sort of visualisation can be a useful way (in low dimensions) to assess how well the model is likely to be able to fit the data and to judge appropriate initialisation scales for the parameters. Each time you re-run the cell a new set of random parameters will be sampled.\n",
"\n",
"**Your Tasks:**\n",
"\n",
"Here you don't need to implement anything. Just run the cell several times and try to answer the following questions:\n",
"\n",
" * How do the weight and bias initialisation scales affect the sort of predicted input-output relationships?\n",
" * Do you think a linear model is a good choice for this data?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"weights_init_range = 0.5\n",
"biases_init_range = 0.1\n",
"\n",
"# Randomly initialise weights matrix\n",
"weights = rng.uniform(\n",
"    low=-weights_init_range,\n",
"    high=weights_init_range,\n",
"    size=(output_dim, input_dim)\n",
")\n",
"\n",
"# Randomly initialise biases vector\n",
"biases = rng.uniform(\n",
"    low=-biases_init_range,\n",
"    high=biases_init_range,\n",
"    size=output_dim\n",
")\n",
"# Calculate predicted model outputs\n",
"outputs = fprop(inputs, weights, biases)\n",
"\n",
"# Plot target and predicted outputs against inputs on same axis\n",
"fig = plt.figure(figsize=(8, 8))\n",
"ax = fig.add_subplot(111, projection='3d')\n",
"ax.plot(inputs[:, 0], inputs[:, 1], targets[:, 0], 'r.', ms=2)\n",
"ax.plot(inputs[:, 0], inputs[:, 1], outputs[:, 0], 'b.', ms=2)\n",
"ax.set_xlabel('Input dim 1')\n",
"ax.set_ylabel('Input dim 2')\n",
"ax.set_zlabel('Output')\n",
"ax.legend(['Targets', 'Predictions'], frameon=False)\n",
"fig.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 3: computing the error function and its gradient\n",
"\n",
"We will now consider the task of regression as covered in the first lecture. Given a set of inputs $\\left\\lbrace \\boldsymbol{x}^{(n)}\\right\\rbrace_{n=1}^N$, the aim in a regression problem is to produce outputs $\\left\\lbrace \\boldsymbol{y}^{(n)}\\right\\rbrace_{n=1}^N$ that are as 'close' as possible to a set of targets $\\left\\lbrace \\boldsymbol{t}^{(n)}\\right\\rbrace_{n=1}^N$. The measure of 'closeness' or distance between target and predicted outputs can vary and is usually a design choice. \n",
"\n",
"A very common choice is the squared Euclidean distance between the predicted and target outputs. This can be computed as the sum of the squared differences between each element in the target and predicted outputs. A widespread convention is to multiply this value by $\\frac{1}{2}$ as this gives a slightly nicer expression for the error gradient. The error for the $n^{\\textrm{th}}$ training example is then expressed by\n",
"\n",
"\\begin{equation}\n",
" E^{(n)} = \\frac{1}{2} \\sum_{k=1}^K \\left\\lbrace \\left( y^{(n)}_k - t^{(n)}_k \\right)^2 \\right\\rbrace.\n",
"\\end{equation}\n",
"\n",
"The overall error is defined as the *average* of this value across all training examples\n",
"\n",
"\\begin{equation}\n",
" \\bar{E} = \\frac{1}{N} \\sum_{n=1}^N \\left\\lbrace E^{(n)} \\right\\rbrace.\n",
"\\end{equation}\n",
"\n",
"*Note here we are using a slightly different convention from the lectures. There the overall error was considered to be the sum of the individual error terms rather than the mean. To differentiate between the two we will use $\\bar{E}$ to represent the average error here as opposed to sum of errors $E$ as used in the slides with $\\bar{E} = \\frac{E}{N}$. Normalising by the number of training examples is helpful to do in practice as this means we can more easily compare errors across data sets / batches of different sizes, and more importantly it means the size of our gradient updates will be independent of the number of training examples summed over.*\n",
"\n",
"Solving the regression problem means finding parameters of the model which minimise $\\bar{E}$. For our simple single-layer affine model here, that corresponds to finding weights $\\mathbf{W}$ and biases $\\boldsymbol{b}$ which minimise $\\bar{E}$. \n",
"\n",
"As mentioned in the lecture, in this case there is actually a closed form solution for the optimal weights and bias parameters. This is the linear least-squares solution those doing MLPR will have come across.\n",
"\n",
"However, in general we will be interested in models for which closed-form solutions do not exist. We will therefore generally use iterative gradient-descent based optimisation methods to find parameters which (locally) minimise the error function. A basic requirement for gradient-descent based training is (unsurprisingly) the ability to evaluate gradients of the error function.\n",
"\n",
"Our end goal is to calculate gradients of the error function with respect to the model parameters $\\mathbf{W}$ and $\\boldsymbol{b}$. As a first step here we will consider the gradient of the error function with respect to the model outputs $\\left\\lbrace \\boldsymbol{y}^{(n)}\\right\\rbrace_{n=1}^N$. This can be written\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial \\bar{E}}{\\partial \\boldsymbol{y}^{(n)}} = \\frac{1}{N} \\left( \\boldsymbol{y}^{(n)} - \\boldsymbol{t}^{(n)} \\right)\n",
" \\qquad \\Leftrightarrow \\qquad\n",
" \\frac{\\partial \\bar{E}}{\\partial y^{(n)}_k} = \\frac{1}{N} \\left( y^{(n)}_k - t^{(n)}_k \\right) \\quad \\forall k \\in \\left\\lbrace 1 \\dots K\\right\\rbrace\n",
"\\end{equation}\n",
"\n",
"*i.e.* the gradient of the error function with respect to the $n^{\\textrm{th}}$ model output is (up to the $\\frac{1}{N}$ factor) the difference between the $n^{\\textrm{th}}$ model and target outputs, corresponding to the $\\boldsymbol{\\delta}^{(n)}$ terms mentioned in the lecture slides.\n",
"\n",
"**Your Tasks:**\n",
"\n",
"Using the equations given above, implement functions computing the mean sum of squared differences error and its gradient with respect to the model outputs. You should implement the functions using the provided skeleton definitions in the cell below."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"def error(outputs, targets):\n",
"    \"\"\"Calculates error function given a batch of outputs and targets.\n",
"\n",
"    Args:\n",
"        outputs: Array of model outputs of shape (batch_size, output_dim).\n",
"        targets: Array of target outputs of shape (batch_size, output_dim).\n",
"\n",
"    Returns:\n",
"        Scalar error function value.\n",
"    \"\"\"\n",
"    raise NotImplementedError(\"TODO implement this function\")\n",
"\n",
"def error_grad(outputs, targets):\n",
"    \"\"\"Calculates gradient of error function with respect to model outputs.\n",
"\n",
"    Args:\n",
"        outputs: Array of model outputs of shape (batch_size, output_dim).\n",
"        targets: Array of target outputs of shape (batch_size, output_dim).\n",
"\n",
"    Returns:\n",
"        Gradient of error function with respect to outputs.\n",
"        This will be an array of shape (batch_size, output_dim).\n",
"    \"\"\"\n",
"    raise NotImplementedError(\"TODO implement this function\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check your implementation by running the test cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"outputs = np.array([[1., 2.], [-1., 0.], [6., -5.], [-1., 1.]])\n",
"targets = np.array([[0., 1.], [3., -2.], [7., -3.], [1., -2.]])\n",
"true_error = 5.\n",
"true_error_grad = np.array([[0.25, 0.25], [-1., 0.5], [-0.25, -0.5], [-0.5, 0.75]])\n",
"\n",
"if not error(outputs, targets) == true_error:\n",
"    print('Error calculated incorrectly.')\n",
"elif not np.allclose(error_grad(outputs, targets), true_error_grad):\n",
"    print('Error gradient calculated incorrectly.')\n",
"else:\n",
"    print('Error function and gradient computed correctly!')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 4: computing gradients with respect to the parameters\n",
"\n",
"In the previous exercise you implemented a function computing the gradient of the error function with respect to the model outputs. For gradient-descent based training, we need to be able to evaluate the gradient of the error function with respect to the model parameters.\n",
"\n",
"Using the [chain rule for derivatives](https://en.wikipedia.org/wiki/Chain_rule#Higher_dimensions), we can write the partial derivatives of the error function with respect to individual elements of the weight matrix and bias vector as\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial E}{\\partial W_{kj}} = \\sum_{n=1}^N \\left\\lbrace \\frac{\\partial E}{\\partial y^{(n)}_k} \\frac{\\partial y^{(n)}_k}{\\partial W_{kj}} \\right\\rbrace\n",
" \\quad \\textrm{and} \\quad\n",
" \\frac{\\partial E}{\\partial b_k} = \\sum_{n=1}^N \\left\\lbrace \\frac{\\partial E}{\\partial y^{(n)}_k} \\frac{\\partial y^{(n)}_k}{\\partial b_k} \\right\\rbrace.\n",
"\\end{equation}\n",
"\n",
"From the definition of our model at the beginning we have\n",
"\n",
"\\begin{equation}\n",
" y^{(n)}_k = \\sum_{d=1}^D \\left\\lbrace W_{kd} x^{(n)}_d \\right\\rbrace + b_k\n",
" \\quad \\Rightarrow \\quad\n",
" \\frac{\\partial y^{(n)}_k}{\\partial W_{kj}} = x^{(n)}_j\n",
" \\quad \\textrm{and} \\quad\n",
" \\frac{\\partial y^{(n)}_k}{\\partial b_k} = 1.\n",
"\\end{equation}\n",
"\n",
"Putting this together we get that\n",
"\n",
"\\begin{equation}\n",
" \\frac{\\partial E}{\\partial W_{kj}} =\n",
" \\sum_{n=1}^N \\left\\lbrace \\frac{\\partial E}{\\partial y^{(n)}_k} x^{(n)}_j \\right\\rbrace\n",
" \\quad \\textrm{and} \\quad\n",
" \\frac{\\partial E}{\\partial b_{k}} =\n",
" \\sum_{n=1}^N \\left\\lbrace \\frac{\\partial E}{\\partial y^{(n)}_k} \\right\\rbrace.\n",
"\\end{equation}\n",
"\n",
"Although this may seem a bit of a roundabout way to get to these results, this method of decomposing the error gradient with respect to the parameters in terms of the gradient of the error function with respect to the model outputs and the derivatives of the model outputs with respect to the model parameters is the key element that allows calculating the parameter gradients of more complex models we will study later in the course.\n",
"\n",
"**Your Tasks:**\n",
"\n",
"Implement a function calculating the gradient of the error function with respect to the weight and bias parameters of the model given the already computed gradient of the error function with respect to the model outputs. You should implement this in the `grads_wrt_params` function in the cell below."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"def grads_wrt_params(inputs, grads_wrt_outputs):\n",
"    \"\"\"Calculates gradients with respect to model parameters.\n",
"\n",
"    Args:\n",
"        inputs: Array of inputs to model of shape (batch_size, input_dim).\n",
"        grads_wrt_outputs: Array of gradients with respect to the model\n",
"            outputs of shape (batch_size, output_dim).\n",
"\n",
"    Returns:\n",
"        List of arrays of gradients with respect to the model parameters\n",
"        `[grads_wrt_weights, grads_wrt_biases]`.\n",
"    \"\"\"\n",
"    raise NotImplementedError(\"TODO implement this function\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check your implementation by running the test cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inputs = np.array([[1., 2., 3.], [-1., 4., -9.]])\n",
"grads_wrt_outputs = np.array([[-1., 1.], [2., -3.]])\n",
"true_grads_wrt_weights = np.array([[-3., 6., -21.], [4., -10., 30.]])\n",
"true_grads_wrt_biases = np.array([1., -2.])\n",
"\n",
"grads_wrt_weights, grads_wrt_biases = grads_wrt_params(\n",
"    inputs, grads_wrt_outputs)\n",
"\n",
"if not np.allclose(true_grads_wrt_weights, grads_wrt_weights):\n",
"    print('Gradients with respect to weights incorrect.')\n",
"elif not np.allclose(true_grads_wrt_biases, grads_wrt_biases):\n",
"    print('Gradients with respect to biases incorrect.')\n",
"else:\n",
"    print('All parameter gradients calculated correctly!')"
]
},
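{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional extra check, analytic gradient implementations can also be validated numerically with central finite differences: perturb one parameter at a time by a small $\\epsilon$ and compare the resulting change in the error with the analytic gradient. The sketch below assumes your `fprop`, `error`, `error_grad` and `grads_wrt_params` implementations from earlier in the notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Numerically check analytic weight gradients with central differences\n",
"# (assumes fprop, error, error_grad and grads_wrt_params are implemented).\n",
"eps = 1e-6\n",
"weights = rng.normal(size=(2, 3))\n",
"biases = rng.normal(size=2)\n",
"inputs = rng.normal(size=(5, 3))\n",
"targets = rng.normal(size=(5, 2))\n",
"grads_wrt_outputs = error_grad(fprop(inputs, weights, biases), targets)\n",
"analytic, _ = grads_wrt_params(inputs, grads_wrt_outputs)\n",
"numeric = np.empty_like(weights)\n",
"for k in range(weights.shape[0]):\n",
"    for j in range(weights.shape[1]):\n",
"        w_plus, w_minus = weights.copy(), weights.copy()\n",
"        w_plus[k, j] += eps\n",
"        w_minus[k, j] -= eps\n",
"        numeric[k, j] = (error(fprop(inputs, w_plus, biases), targets) -\n",
"                         error(fprop(inputs, w_minus, biases), targets)) / (2 * eps)\n",
"print(np.allclose(analytic, numeric, atol=1e-5))  # True if consistent"
]
},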
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 5: wrapping the functions into reusable components\n",
"\n",
"In exercises 1, 3 and 4 you implemented methods to compute the predicted outputs of our model, evaluate the error function and its gradient on the outputs and finally to calculate the gradients of the error with respect to the model parameters. Together they constitute all the basic ingredients we need to implement a gradient-descent based iterative learning procedure.\n",
|
|
"\n",
|
|
"Although you could implement training code which directly uses the functions you defined, this would only be usable for this particular model architecture. In subsequent labs we will want to use the affine transform functions as the basis for more interesting multi-layer models. We will therefore wrap the implementations you just wrote in to reusable components that we can combine to build more complex models later in the course.\n",
|
|
"\n",
|
|
"**Your Tasks:**\n",
|
|
"\n",
|
|
" * In the [`mlp.layers`](../mlp/layers.py) module, use your implementations of `fprop` and `grad_wrt_params` above to implement the corresponding methods in the skeleton `AffineLayer` class provided.\n",
|
|
" * In the [`mlp.errors`](../mlp/errors.py) module use your implementation of `error` and `error_grad` to implement the `__call__` and `grad` methods respectively of the skeleton `SumOfSquaredDiffsError` class provided. Note `__call__` is a special Python method that allows an object to be used with a function call syntax.\n",
|
|
" * All functions where you need to implement has been marked with a `#TODO` comment. You don't need to implement other functions right now.\n",
|
|
"\n",
|
|
"Run the cell below to use your completed `AffineLayer` and `SumOfSquaredDiffsError` implementations to train a single-layer model using batch gradient descent on the CCPP dataset. Remember to reload the notebook if you made changes to the `mlp` module."
|
|
]
|
|
},
|
|
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from mlp.layers import AffineLayer\n",
"from mlp.errors import SumOfSquaredDiffsError\n",
"from mlp.models import SingleLayerModel\n",
"from mlp.initialisers import UniformInit, ConstantInit\n",
"from mlp.learning_rules import GradientDescentLearningRule\n",
"from mlp.optimisers import Optimiser\n",
"import logging\n",
"\n",
"# Seed a random number generator\n",
"seed = 27092016\n",
"rng = np.random.RandomState(seed)\n",
"\n",
"# Set up a logger object to print info about the training run to stdout\n",
"logger = logging.getLogger()\n",
"logger.setLevel(logging.INFO)\n",
"logger.handlers = [logging.StreamHandler()]\n",
"\n",
"# Create data provider objects for the CCPP training set\n",
"train_data = CCPPDataProvider('train', [0, 1], batch_size=100, rng=rng)\n",
"input_dim, output_dim = 2, 1\n",
"\n",
"# Create a parameter initialiser which will sample random uniform values\n",
"# from [-0.1, 0.1]\n",
"param_init = UniformInit(-0.1, 0.1, rng=rng)\n",
"\n",
"# Create our single layer model\n",
"layer = AffineLayer(input_dim, output_dim, param_init, param_init)\n",
"model = SingleLayerModel(layer)\n",
"\n",
"# Initialise the error object\n",
"error = SumOfSquaredDiffsError()\n",
"\n",
"# Use a basic gradient descent learning rule with a small learning rate\n",
"learning_rule = GradientDescentLearningRule(learning_rate=1e-2)\n",
"\n",
"# Use the created objects to initialise a new Optimiser instance.\n",
"optimiser = Optimiser(model, error, learning_rule, train_data)\n",
"\n",
"# Run the optimiser for 10 epochs (full passes through the training set)\n",
"# printing statistics every epoch.\n",
"stats, keys, _ = optimiser.train(num_epochs=10, stats_interval=1)\n",
"\n",
"# Plot the change in the error over training.\n",
"fig = plt.figure(figsize=(8, 4))\n",
"ax = fig.add_subplot(111)\n",
"ax.plot(np.arange(1, stats.shape[0] + 1), stats[:, keys['error(train)']])\n",
"ax.set_xlabel('Epoch number')\n",
"ax.set_ylabel('Error')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using similar code to exercise 2, we can visualise the joint input-output space for the trained model. If you implemented the required methods correctly you should now see a much improved fit between predicted and target outputs when running the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_provider = CCPPDataProvider(\n",
"    which_set='train',\n",
"    input_dims=[0, 1],\n",
"    batch_size=5000,\n",
"    max_num_batches=1,\n",
"    shuffle_order=False\n",
")\n",
"\n",
"inputs, targets = data_provider.next()\n",
"\n",
"# Calculate predicted model outputs\n",
"outputs = model.fprop(inputs)[-1]\n",
"\n",
"# Plot target and predicted outputs against inputs on same axis\n",
"fig = plt.figure(figsize=(8, 8))\n",
"ax = fig.add_subplot(111, projection='3d')\n",
"ax.plot(inputs[:, 0], inputs[:, 1], targets[:, 0], 'r.', ms=2)\n",
"ax.plot(inputs[:, 0], inputs[:, 1], outputs[:, 0], 'b.', ms=2)\n",
"ax.set_xlabel('Input dim 1')\n",
"ax.set_ylabel('Input dim 2')\n",
"ax.set_zlabel('Output')\n",
"ax.legend(['Targets', 'Predictions'], frameon=False)\n",
"fig.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 6: visualising training trajectories in parameter space"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Running the cell below will display an interactive widget which plots the trajectories of gradient-based training of the single-layer affine model on the CCPP dataset in the three-dimensional parameter space (two weights plus bias) from random initialisations. Also shown on the right is a plot of the evolution of the error function (evaluated on the current batch) over training. By moving the sliders you can alter the training hyperparameters to investigate the effect they have on how training proceeds. The hyperparameters are as follows:\n",
"\n",
"- `n_epochs` : number of training epochs,\n",
"- `batch_size` : number of data points per batch,\n",
"- `log_lr` : logarithm of the learning rate,\n",
"- `n_inits` : number of different parameter initialisations,\n",
"- `w_scale` : min/max initial weight value,\n",
"- `b_scale` : min/max initial bias value,\n",
"- `elev`/`azim` : spherical coordinates for the camera position.\n",
"\n",
"When adjusting these hyperparameters, keep in mind that the magnitude of each (per batch) update is independent of the batch size. Increasing the batch size may therefore necessitate a larger number of epochs to ensure convergence, or a larger learning rate.\n",
"\n",
"**Your Tasks:**\n",
"\n",
"No need to implement anything. Run the cell and explore the following questions:\n",
"\n",
" * Are there multiple local minima in parameter space here? Why?\n",
" * What are the effects of using very small learning rates? And of very large ones?\n",
" * How does the batch size affect learning?\n",
"\n",
"**Note:** You don't need to understand how the code below works. The idea of this exercise is to help you understand the role of the various hyperparameters involved in gradient-descent based training methods."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from ipywidgets import interact\n",
"%matplotlib inline\n",
"\n",
"def setup_figure():\n",
"    # create figure and axes\n",
"    fig = plt.figure(figsize=(12, 6))\n",
"    ax1 = fig.add_axes([0., 0., 0.5, 1.], projection='3d')\n",
"    ax2 = fig.add_axes([0.6, 0.1, 0.4, 0.8])\n",
"    # set axes properties\n",
"    ax2.spines['right'].set_visible(False)\n",
"    ax2.spines['top'].set_visible(False)\n",
"    ax2.yaxis.set_ticks_position('left')\n",
"    ax2.xaxis.set_ticks_position('bottom')\n",
"    #ax2.set_yscale('log')\n",
"    ax1.set_xlim((-2, 2))\n",
"    ax1.set_ylim((-2, 2))\n",
"    ax1.set_zlim((-2, 2))\n",
"    # set axes labels and title\n",
"    ax1.set_title('Parameter trajectories over training')\n",
"    ax1.set_xlabel('Weight 1')\n",
"    ax1.set_ylabel('Weight 2')\n",
"    ax1.set_zlabel('Bias')\n",
"    ax2.set_title('Batch errors over training')\n",
"    ax2.set_xlabel('Batch update number')\n",
"    ax2.set_ylabel('Batch error')\n",
"    return fig, ax1, ax2\n",
"\n",
"def visualise_training(n_epochs=1, batch_size=200, log_lr=-1., n_inits=1,\n",
"                       w_scale=1., b_scale=1., elev=30., azim=0.):\n",
"    fig, ax1, ax2 = setup_figure()\n",
"    # create seeded random number generator\n",
"    rng = np.random.RandomState(1234)\n",
"    # create data provider\n",
"    data_provider = CCPPDataProvider(\n",
"        input_dims=[0, 1],\n",
"        batch_size=batch_size,\n",
"        shuffle_order=False,\n",
"    )\n",
"    learning_rate = 10 ** log_lr\n",
"    n_batches = data_provider.num_batches\n",
"    weights_traj = np.empty((n_inits, n_epochs * n_batches + 1, 1, 2))\n",
"    biases_traj = np.empty((n_inits, n_epochs * n_batches + 1, 1))\n",
"    errors_traj = np.empty((n_inits, n_epochs * n_batches))\n",
"    # randomly initialise parameters\n",
"    weights = rng.uniform(-w_scale, w_scale, (n_inits, 1, 2))\n",
"    biases = rng.uniform(-b_scale, b_scale, (n_inits, 1))\n",
"    # store initial parameters\n",
"    weights_traj[:, 0] = weights\n",
"    biases_traj[:, 0] = biases\n",
"    # iterate across different initialisations\n",
"    for i in range(n_inits):\n",
"        # iterate across epochs\n",
"        for e in range(n_epochs):\n",
"            # iterate across batches\n",
"            for b, (inputs, targets) in enumerate(data_provider):\n",
"                outputs = fprop(inputs, weights[i], biases[i])\n",
"                errors_traj[i, e * n_batches + b] = error(outputs, targets)\n",
"                grad_wrt_outputs = error_grad(outputs, targets)\n",
"                weights_grad, biases_grad = grads_wrt_params(inputs, grad_wrt_outputs)\n",
"                weights[i] -= learning_rate * weights_grad\n",
"                biases[i] -= learning_rate * biases_grad\n",
"                weights_traj[i, e * n_batches + b + 1] = weights[i]\n",
"                biases_traj[i, e * n_batches + b + 1] = biases[i]\n",
"    # choose a different color for each trajectory\n",
"    colors = plt.cm.jet(np.linspace(0, 1, n_inits))\n",
"    # plot all trajectories\n",
"    for i in range(n_inits):\n",
"        lines_1 = ax1.plot(\n",
"            weights_traj[i, :, 0, 0],\n",
"            weights_traj[i, :, 0, 1],\n",
"            biases_traj[i, :, 0],\n",
"            '-', c=colors[i], lw=2)\n",
"        lines_2 = ax2.plot(\n",
"            np.arange(n_batches * n_epochs),\n",
"            errors_traj[i],\n",
"            c=colors[i]\n",
"        )\n",
"    ax1.view_init(elev, azim)\n",
"    plt.show()\n",
"\n",
"w = interact(\n",
"    visualise_training,\n",
"    elev=(-90, 90, 2),\n",
"    azim=(-180, 180, 2),\n",
"    n_epochs=(1, 50),\n",
"    batch_size=(10, 1000, 100),\n",
"    log_lr=(-5., 1.),\n",
"    w_scale=(0., 4.),\n",
"    b_scale=(0., 4.),\n",
"    n_inits=(1, 10)\n",
")\n",
"\n",
"for child in w.widget.children:\n",
"    child.layout.width = '100%'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hints:\n",
"- Remember that an affine single-layer model is linear with respect to its parameters, such that any given output $y$ can be expressed as $y = w_1 x_1 + w_2 x_2 + b$. Substituting this into the loss function we have:\n",
"\\begin{equation}\n",
"    E = \\sum_{n=1}^N \\frac{1}{2} \\left( y^{(n)} - t^{(n)} \\right)^2 = \\sum_{n=1}^N \\frac{1}{2} \\left( w_1 x_1^{(n)} + w_2 x_2^{(n)} + b - t^{(n)} \\right)^2.\n",
"\\end{equation}\n",
"The loss surface is therefore *quadratic* with respect to the parameters $w_1, w_2, b$. What effect does this have on the number of minima?\n",
"\n",
"- Note that by using batch-wise updates, we are computing gradients of the loss surface described by a subset of size $B < N$ of the training data:\n",
"\\begin{equation}\n",
"    E = \\sum_{n=1}^B \\frac{1}{2} \\left( y^{(n)} - t^{(n)} \\right)^2.\n",
"\\end{equation}\n",
"Hence, this gradient direction is only an approximation of the optimal update direction dictated by the full dataset. With very small batch sizes, what convergence behaviour would we therefore expect?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PyTorch\n",
"\n",
"PyTorch is a deep-learning framework that allows us to easily build and train neural networks. It is based on the concept of [tensors](https://pytorch.org/docs/stable/tensors.html), which are multidimensional arrays. In this section, we will use PyTorch to build a simple neural network and train it on the Combined Cycle Power Plant dataset."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import torch\n",
"import torch.nn as nn\n",
"import torch.optim as optim\n",
"from torch.utils.data import Dataset, DataLoader"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To ensure [reproducibility](https://pytorch.org/docs/stable/notes/randomness.html), we will set the seed of the random number generator to a fixed value.\n",
"\n",
"We will also use the same hyperparameters as in the previous section."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"torch.manual_seed(seed)\n",
"\n",
"learning_rate = 1e-2\n",
"num_epochs = 10\n",
"batch_size = 100\n",
"input_dim = 2\n",
"output_dim = 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To work with data in PyTorch, we need to create a [Dataset](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) object. This object will be used by a [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) object to load the data in batches. The DataLoader object can also shuffle the data at each epoch.\n",
"\n",
"*Here, we do not shuffle the data (we set `shuffle=False`) as we want to compare the results with the previous section. In general, however, it is strongly advised to shuffle the data in the training set, but not in the validation or the test set. Can you think about why?*\n",
"\n",
"For a dataset to be used with PyTorch, it needs to have the following methods:\n",
"- `__len__` : returns the size of the dataset,\n",
"- `__getitem__` : returns the $i^{\\textrm{th}}$ sample of the dataset.\n",
"\n",
"Also, the data needs to be converted to PyTorch tensors. This can be done by using the [TensorDataset](https://pytorch.org/docs/stable/data.html#torch.utils.data.TensorDataset) class or the `torch.from_numpy()` function."
]
},
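{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative aside (not part of the lab exercises), for simple in-memory arrays like these an alternative to writing a custom `Dataset` subclass is `torch.utils.data.TensorDataset`, which wraps pre-built tensors directly and provides `__len__` and `__getitem__` for you. The tensor shapes and values below are arbitrary placeholders:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from torch.utils.data import TensorDataset\n",
"\n",
"# Illustrative sketch only: wrap arbitrary tensors in a TensorDataset\n",
"xs = torch.randn(6, 2)  # 6 samples, 2 input dimensions\n",
"ts = torch.randn(6, 1)  # 6 samples, 1 target dimension\n",
"tiny_dataset = TensorDataset(xs, ts)\n",
"print(len(tiny_dataset))         # number of samples\n",
"print(tiny_dataset[0][0].shape)  # shape of one input sample"
]
},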
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"class CCPPDataProvider(Dataset):\n",
"    \"\"\"Combined Cycle Power Plant dataset.\"\"\"\n",
"\n",
"    def __init__(self, data_path, which_set='train', x_dims=None):\n",
"        super().__init__()\n",
"        self.data = np.load(data_path)\n",
"\n",
"        assert which_set in ['train', 'valid'], (\n",
"            'Expected which_set to be either train or valid. '\n",
"            'Got {0}'.format(which_set)\n",
"        )\n",
"        self.x = self.data[which_set + '_inputs']\n",
"        if x_dims is not None:\n",
"            self.x = self.x[:, x_dims]\n",
"        self.x = torch.from_numpy(self.x).to(torch.float32)\n",
"        self.t = self.data[which_set + '_targets']\n",
"        self.t = torch.from_numpy(self.t).to(torch.float32)\n",
"\n",
"    def __len__(self):\n",
"        return len(self.x)\n",
"\n",
"    def __getitem__(self, idx):\n",
"        return self.x[idx], self.t[idx]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `Linear` layer, also called a fully-connected layer, performs the affine operation described above by combining the input data with the weights and biases."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"class SingleLayerModel(nn.Module):\n",
"    \"\"\"Single layer model.\"\"\"\n",
"\n",
"    def __init__(self, input_dim, output_dim):\n",
"        super().__init__()\n",
"        self.layer = nn.Linear(input_dim, output_dim)\n",
"\n",
"    def forward(self, x):\n",
"        return self.layer(x)"
]
},
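{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check (not part of the lab exercises), the cell below confirms that `nn.Linear` computes an affine transform by comparing its output against the same computation written out explicitly using the layer's `weight` and `bias` tensors. The layer sizes and inputs here are arbitrary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: nn.Linear computes y = x W^T + b\n",
"check_layer = nn.Linear(2, 1)\n",
"x_check = torch.randn(4, 2)\n",
"manual = x_check @ check_layer.weight.T + check_layer.bias\n",
"print(torch.allclose(check_layer(x_check), manual))"
]
},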
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The weights and biases of each layer (the neural network parameters) are initialised randomly, but we can decide what distribution to sample from using the [`torch.nn.init`](https://pytorch.org/docs/stable/nn.init.html) module. Here, we will use a uniform distribution for the weights and set the biases to 0."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"def weights_init(m):\n",
"    \"\"\"Reinitialise model weights.\"\"\"\n",
"    classname = m.__class__.__name__\n",
"    if classname.find('Linear') != -1:\n",
"        nn.init.uniform_(m.weight.data, -0.1, 0.1)\n",
"        nn.init.constant_(m.bias.data, 0)\n"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"data_path = os.path.join(os.environ['MLP_DATA_DIR'], 'ccpp_data.npz')\n",
"assert os.path.isfile(data_path), ('Data file does not exist at expected path: ' + data_path)\n",
"\n",
"dataset = CCPPDataProvider(data_path, which_set='train', x_dims=[0, 1])\n",
"\n",
"dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The error between predictions and ground truth values is calculated using the mean squared error loss function. This function is implemented in PyTorch as [`torch.nn.MSELoss`](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html).\n",
"\n",
"There are many [optimisers](https://pytorch.org/docs/stable/optim.html#module-torch.optim) available in PyTorch. Here, we will use the [Adam](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) optimiser. This optimiser takes as input the parameters to optimise and the learning rate."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = SingleLayerModel(input_dim, output_dim)\n",
"model.apply(weights_init)\n",
"\n",
"print(f\"Model structure: {model}\\n\\n\")\n",
"\n",
"loss = nn.MSELoss()  # Mean Squared Error loss\n",
"optimizer = optim.Adam(model.parameters(), lr=learning_rate)  # Adam optimiser"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The training loop will be similar to the one in the previous section. For every epoch, we will iterate through the dataset in batches. For each batch, we will compute the predictions, the loss and the gradients. We will then update the parameters using the gradients and the optimiser.\n",
"\n",
"However, we will use the [`Optimizer.zero_grad()`](https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html) method to set the gradients to zero before computing the gradients of the loss function with respect to the parameters. *Think about why we need to do this and what would happen if we did not.*\n",
"\n",
"We will also use the [`Optimizer.step()`](https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.step.html) method to update the parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Keep track of the loss values over training\n",
"train_loss = []\n",
"\n",
"for epoch in range(num_epochs):\n",
"    model.train()\n",
"    epoch_loss = 0\n",
"\n",
"    for x, t in dataloader:\n",
"        y = model(x)\n",
"        E_value = loss(y, t)\n",
"        optimizer.zero_grad()\n",
"        E_value.backward()\n",
"        optimizer.step()\n",
"        epoch_loss += E_value.item()\n",
"    # Calculate average loss for this epoch\n",
"    avg_epoch_loss = epoch_loss / len(dataloader)\n",
"    print(f\"Epoch [{epoch+1}/{num_epochs}]\\tError(train): {avg_epoch_loss:.4f}\")\n",
"    train_loss.append(avg_epoch_loss)\n",
"\n",
"# Plot the change in the error over training.\n",
"fig = plt.figure(figsize=(8, 4))\n",
"ax = fig.add_subplot(111)\n",
"ax.plot(train_loss)\n",
"ax.set_xlabel('Epoch number')\n",
"ax.set_ylabel('Error')\n",
"fig.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When the training is finished, we can compute the predictions and plot them along with the ground truth values.\n",
"\n",
"Here, we will use the [`torch.no_grad()`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) context manager to disable gradient calculation as we do not need it for the predictions."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"predictions = []\n",
"inputs = []\n",
"targets = []\n",
"\n",
"with torch.no_grad():\n",
"    model.eval()\n",
"    for x, t in dataloader:\n",
"        inputs.append(x.numpy())\n",
"        targets.append(t.numpy())\n",
"        y = model(x)\n",
"        predictions.append(y.numpy())\n",
"\n",
"predictions = np.concatenate(predictions, axis=0)\n",
"inputs = np.concatenate(inputs, axis=0)\n",
"targets = np.concatenate(targets, axis=0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot target and predicted outputs against inputs on same axis\n",
"fig = plt.figure(figsize=(8, 8))\n",
"ax = fig.add_subplot(111, projection='3d')\n",
"ax.plot(inputs[:, 0], inputs[:, 1], targets[:, 0], 'r.', ms=2)\n",
"ax.plot(inputs[:, 0], inputs[:, 1], predictions[:, 0], 'b.', ms=2)\n",
"ax.set_xlabel('Input dim 1')\n",
"ax.set_ylabel('Input dim 2')\n",
"ax.set_zlabel('Output')\n",
"ax.legend(['Targets', 'Predictions'], frameon=False)\n",
"fig.tight_layout()\n",
"plt.show()"
]
},
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}