mlpractical/notebooks/02_Linear_models.ipynb


{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"-"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Single Layer Models\n",
"\n",
"***\n",
"### Note on storing matrices in computer memory\n",
"\n",
"Suppose you want to store the following matrix in memory: $\\left[ \\begin{array}{ccc}\n",
"1 & 2 & 3 \\\\\n",
"4 & 5 & 6 \\\\\n",
"7 & 8 & 9 \\end{array} \\right]$ \n",
"\n",
"If you allocate the memory at once for the whole matrix, then the above matrix would be organised as a vector in one of two possible forms:\n",
"\n",
"* Row-wise layout where the order would look like: $\\left [ \\begin{array}{ccccccccc}\n",
"1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\end{array} \\right ]$\n",
"* Column-wise layout where the order would look like: $\\left [ \\begin{array}{ccccccccc}\n",
"1 & 4 & 7 & 2 & 5 & 8 & 3 & 6 & 9 \\end{array} \\right ]$\n",
"\n",
"Although `numpy` can easily handle both formats (possibly with some computational overhead), in our code we will stick with the more modern (and default) `C`-like approach and use the row-wise format (in contrast to Fortran that used a column-wise approach). \n",
"\n",
"This means, that in this tutorial:\n",
"* vectors are kept row-wise $\\mathbf{x} = (x_1, x_1, \\ldots, x_D) $ (rather than $\\mathbf{x} = (x_1, x_1, \\ldots, x_D)^T$)\n",
"* similarly, in case of matrices we will stick to: $\\left[ \\begin{array}{cccc}\n",
"x_{11} & x_{12} & \\ldots & x_{1D} \\\\\n",
"x_{21} & x_{22} & \\ldots & x_{2D} \\\\\n",
"x_{31} & x_{32} & \\ldots & x_{3D} \\\\ \\end{array} \\right]$ and each row (i.e. $\\left[ \\begin{array}{cccc} x_{11} & x_{12} & \\ldots & x_{1D} \\end{array} \\right]$) represents a single data-point (like one MNIST image or one window of observations)\n",
"\n",
"In lecture slides you will find the equations following the conventional mathematical approach, using column vectors, but you can easily map between column-major and row-major organisations using a matrix transpose.\n",
"\n",
"***\n",
"\n",
"## Linear and Affine Transforms\n",
"\n",
"The basis of all linear models is the so called affine transform, which is a transform that implements a linear transformation and translation of the input features. The transforms we are going to use are parameterised by:\n",
"\n",
" * A weight matrix $\\mathbf{W} \\in \\mathbb{R}^{D\\times K}$: where element $w_{ik}$ is the weight from input $x_i$ to output $y_k$\n",
" * A bias vector $\\mathbf{b}\\in R^{K}$ : where element $b_{k}$ is the bias for output $k$\n",
"\n",
"Note, the bias is simply some additive term, and can be easily incorporated into an additional row in weight matrix and an additional input in the inputs which is set to $1.0$ (as in the below picture taken from the lecture slides). However, here (and in the code) we will keep them separate.\n",
"\n",
"![Making Predictions](res/singleLayerNetWts-1.png)\n",
"\n",
"For instance, for the above example of 5-dimensional input vector by $\\mathbf{x} = (x_1, x_2, x_3, x_4, x_5)$, weight matrix $\\mathbf{W}=\\left[ \\begin{array}{ccc}\n",
"w_{11} & w_{12} & w_{13} \\\\\n",
"w_{21} & w_{22} & w_{23} \\\\\n",
"w_{31} & w_{32} & w_{33} \\\\\n",
"w_{41} & w_{42} & w_{43} \\\\\n",
"w_{51} & w_{52} & w_{53} \\\\ \\end{array} \\right]$, bias vector $\\mathbf{b} = (b_1, b_2, b_3)$ and outputs $\\mathbf{y} = (y_1, y_2, y_3)$, one can write the transformation as follows:\n",
"\n",
"(for the $i$-th output)\n",
"\n",
"(1) $\n",
"\\begin{equation}\n",
" y_i = b_i + \\sum_j x_jw_{ji}\n",
"\\end{equation}\n",
"$\n",
"\n",
"or the equivalent vector form (where $\\mathbf w_i$ is the $i$-th column of $\\mathbf W$, but note, when we **slice** the $i$th column we will get a **vector** $\\mathbf w_i = (w_{1i}, w_{2i}, w_{3i}, w_{4i}, w_{5i})$, hence the transpose for $\\mathbf w_i$ in the below equation):\n",
"\n",
"(2) $\n",
"\\begin{equation}\n",
" y_i = b_i + \\mathbf x \\mathbf w_i^T\n",
"\\end{equation}\n",
"$\n",
"\n",
"The same operation can be also written in matrix form, to compute all the outputs $\\mathbf{y}$ at the same time:\n",
"\n",
"(3) $\n",
"\\begin{equation}\n",
" \\mathbf y=\\mathbf x\\mathbf W + \\mathbf b\n",
"\\end{equation}\n",
"$\n",
"\n",
"This is equivalent to slides 12/13 in lecture 1, except we are using row vectors.\n",
"\n",
"When $\\mathbf{x}$ is a mini-batch (contains $B$ data-points of dimension $D$ each), i.e. $\\left[ \\begin{array}{cccc}\n",
"x_{11} & x_{12} & \\ldots & x_{1D} \\\\\n",
"x_{21} & x_{22} & \\ldots & x_{2D} \\\\\n",
"\\cdots \\\\\n",
"x_{B1} & x_{B2} & \\ldots & x_{BD} \\\\ \\end{array} \\right]$ equation (3) effectively becomes to be\n",
"\n",
"(4) $\n",
"\\begin{equation}\n",
" \\mathbf Y=\\mathbf X\\mathbf W + \\mathbf b\n",
"\\end{equation}\n",
"$\n",
"\n",
"where $\\mathbf{W} \\in \\mathbb{R}^{D\\times K}$ and both $\\mathbf{X}\\in\\mathbb{R}^{B\\times D}$ and $\\mathbf{Y}\\in\\mathbb{R}^{B\\times K}$ are matrices, and $\\mathbf{b}\\in\\mathbb{R}^{1\\times K}$ needs to be <a href=\"http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html\">broadcasted</a> $B$ times (numpy will do this by default). However, we will not make an explicit distinction between a special case for $B=1$ and $B>1$ and simply use equation (3) instead, although $\\mathbf{x}$ and hence $\\mathbf{y}$ could be matrices. From an implementation point of view, it does not matter.\n",
"\n",
"The desired functionality for matrix multiplication in numpy is provided by <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html\">numpy.dot</a> function. If you haven't use it so far, get familiar with it as we will use it extensively."
]
},
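{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick, illustrative sketch (not one of the exercises), the toy cell below checks the two memory layouts and equation (4) on small made-up matrices; the names and values used here (`A`, `X_toy`, `W_toy`, `b_toy`) are purely for demonstration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy\n",
"\n",
"#row-major (C) versus column-major (Fortran) flattening of the 3x3 example above\n",
"A = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n",
"print 'row-wise layout:   ', A.ravel(order='C')\n",
"print 'column-wise layout:', A.ravel(order='F')\n",
"\n",
"#equation (4) on a toy mini-batch: X_toy is (B, D), W_toy is (D, K), b_toy is (K,)\n",
"toy_rng = numpy.random.RandomState(123)\n",
"X_toy = toy_rng.uniform(-1, 1, (2, 5))\n",
"W_toy = toy_rng.uniform(-0.1, 0.1, (5, 3))\n",
"b_toy = numpy.zeros((3,))\n",
"Y_toy = numpy.dot(X_toy, W_toy) + b_toy  #b_toy is broadcast across the 2 rows\n",
"print 'Y_toy has shape:', Y_toy.shape"
]
},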
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### A general note on random number generators\n",
"\n",
"It is generally a good practice (for machine learning applications **not** for cryptography!) to seed a pseudo-random number generator once at the beginning of the experiment, and use it later through the code where necesarry. This makes it easier to reproduce results since random initialisations can be replicated. As such, within this course we are going use a single random generator object, similar to the below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy\n",
"\n",
"#initialise the random generator to be used later\n",
"seed=[2015, 10, 1]\n",
"random_generator = numpy.random.RandomState(seed)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 1 \n",
"\n",
"Using `numpy.dot`, implement **forward** propagation through the linear transform defined by equations (3) and (4) for $B=1$ and $B>1$ i.e. use parameters $\\mathbf{W}$ and $\\mathbf{b}$ with data $\\mathbf{X}$ to determine $\\mathbf{Y}$. Use `MNISTDataProvider` (introduced last week) to generate $\\mathbf{X}$. We are going to write a function for each equation:\n",
"1. `y1_equation_1`: Return the value of the $1^{st}$ dimension of $\\mathbf{y}$ (the output of the first output node) given a single training data point $\\mathbf{x}$ using a sum\n",
"1. `y1_equation_2`: Repeat above using vector multiplication (use `numpy.dot()`)\n",
"1. `y_equation_3`: Return the value of $\\mathbf{y}$ (the whole output layer) given a single training data point $\\mathbf{x}$\n",
"1. `Y_equation_4`: Return the value of $\\mathbf{Y}$ given $\\mathbf{X}$\n",
"\n",
"We have initialised $\\mathbf{b}$ to zeros and randomly generated $\\mathbf{W}$ for you. The constants introduced above are:\n",
"* The number of data points $B = 3$\n",
"* The dimensionality of the input $D = 784$\n",
"* The dimensionality of the output $K = 10$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from mlp.dataset import MNISTDataProvider\n",
"\n",
"mnist_dp = MNISTDataProvider(dset='valid', batch_size=3, max_num_batches=1, randomize=False)\n",
"B = 3\n",
"D = 784\n",
"K = 10\n",
"irange = 0.1\n",
"W = random_generator.uniform(-irange, irange, (D, K)) \n",
"b = numpy.zeros((10,))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"\n",
"mnist_dp.reset()\n",
"\n",
"#implement following functions, then run the cell\n",
"def y1_equation_1(x, W, b):\n",
" raise NotImplementedError()\n",
" \n",
"def y1_equation_2(x, W, b):\n",
" raise NotImplementedError()\n",
"\n",
"def y_equation_3(x, W, b):\n",
" #use numpy.dot\n",
" raise NotImplementedError()\n",
"\n",
"def Y_equation_4(x, W, b):\n",
" #use numpy.dot\n",
" raise NotImplementedError()\n",
"\n",
"for X, t in mnist_dp:\n",
" n = 0\n",
" y1e1 = y1_equation_1(x[n], W, b)\n",
" y1e2 = y1_equation_2(x[n], W, b)\n",
" ye3 = y_equation_3(x[n], W, b)\n",
" Ye4 = Y_equation_4(x, W, b)\n",
"\n",
"print 'y1e1', y1e1\n",
"print 'y1e1', y1e1\n",
"print 'ye3', ye3\n",
"print 'Ye4', ye4\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## Exercise 2\n",
"\n",
"Modify the examples from Exercise 1 to perform **backward** propagation, that is, given $\\mathbf{y}$ (obtained in the previous step) and weight matrix $\\mathbf{W}$, project $\\mathbf{y}$ onto the input space $\\mathbf{x}$ (ignore or set to zero the biases towards $\\mathbf{x}$ in backward pass, and note, we are **not** trying to reconstruct the original $\\mathbf{x}$). Mathematically, we are interested in the following transformation: $\\mathbf{z}=\\mathbf{y}\\mathbf{W}^T$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
},
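{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want a hint, the cell below is a minimal sketch of how such a backward projection could look (not a reference solution): the helper name `bprop` is our own, and it assumes `Ye4` and `W` from Exercise 1 are still in scope."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#minimal sketch of the backward projection z = y W^T (biases ignored)\n",
"def bprop(y, W):\n",
"    #project the outputs back onto the input space using the transposed weights\n",
"    return numpy.dot(y, W.T)\n",
"\n",
"#example usage, assuming Ye4 (B x K) and W (D x K) from Exercise 1 are defined\n",
"Z = bprop(Ye4, W)\n",
"print 'Z has shape:', Z.shape"
]
},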
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***\n",
"## Exercise 3 (optional)\n",
"\n",
"In case you do not fully understand how matrix-vector and/or matrix-matrix products work, consider implementing `my_dot_mat_mat` function (you have been given `my_dot_vec_mat` code to look at as an example) which takes as the input the following arguments:\n",
"\n",
"* D-dimensional input vector $\\mathbf{x} = (x_1, x_2, \\ldots, x_D) $.\n",
"* Weight matrix $\\mathbf{W}\\in\\mathbb{R}^{D\\times K}$:\n",
"\n",
"and returns:\n",
"\n",
"* K-dimensional output vector $\\mathbf{y} = (y_1, \\ldots, y_K) $\n",
"\n",
"Your job is to write a variant that works in a mini-batch mode where both $\\mathbf{x}\\in\\mathbb{R}^{B\\times D}$ and $\\mathbf{y}\\in\\mathbb{R}^{B\\times K}$ are matrices in which each rows contain one of $B$ data-points from mini-batch (rather than $\\mathbf{x}\\in\\mathbb{R}^{D}$ and $\\mathbf{y}\\in\\mathbb{R}^{K}$)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def my_dot_vec_mat(x, W):\n",
" J = x.shape[0]\n",
" K = W.shape[1]\n",
" assert (J == W.shape[0]), (\n",
" \"Number of columns of x expected to \"\n",
" \" to be equal to the number of rows in \"\n",
" \"W, bot got shapes %s, %s\" % (x.shape, W.shape)\n",
" )\n",
" y = numpy.zeros((K,))\n",
" for k in xrange(0, K):\n",
" for j in xrange(0, J):\n",
" y[k] += x[j] * W[j,k]\n",
" \n",
" return y"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"irange = 0.1 #+-range from which we draw the random numbers\n",
"\n",
"x = random_generator.uniform(-irange, irange, (5,)) \n",
"W = random_generator.uniform(-irange, irange, (5,3)) \n",
"\n",
"y_my = my_dot_vec_mat(x, W)\n",
"y_np = numpy.dot(x, W)\n",
"\n",
"same = numpy.allclose(y_my, y_np)\n",
"\n",
"if same:\n",
" print 'Well done!'\n",
"else:\n",
" print 'Matrices are different:'\n",
" print 'y_my is: ', y_my\n",
" print 'y_np is: ', y_np"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def my_dot_mat_mat(x, W):\n",
" I = x.shape[0]\n",
" J = x.shape[1]\n",
" K = W.shape[1]\n",
" assert (J == W.shape[0]), (\n",
" \"Number of columns in of x expected to \"\n",
" \" to be the same as rows in W, got\"\n",
" )\n",
" #allocate the output container\n",
" y = numpy.zeros((I, K))\n",
" \n",
" #implement here matrix-matrix inner product here\n",
" raise NotImplementedError('Write me!')\n",
" \n",
" return y"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Test whether you get comparable numbers to what numpy is producing:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"irange = 0.1 #+-range from which we draw the random numbers\n",
"\n",
"x = random_generator.uniform(-irange, irange, (2,5)) \n",
"W = random_generator.uniform(-irange, irange, (5,3)) \n",
"\n",
"y_my = my_dot_mat_mat(x, W)\n",
"y_np = numpy.dot(x, W)\n",
"\n",
"same = numpy.allclose(y_my, y_np)\n",
"\n",
"if same:\n",
" print 'Well done!'\n",
"else:\n",
" print 'Matrices are different:'\n",
" print 'y_my is: ', y_my\n",
" print 'y_np is: ', y_np"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we benchmark each approach (we do it in separate cells, as timeit currently can measure whole cell execuiton only)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#generate bit bigger matrices, to better evaluate timings\n",
"x = random_generator.uniform(-irange, irange, (10, 1000))\n",
"W = random_generator.uniform(-irange, irange, (1000, 100))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print 'my_dot timings:'\n",
"%timeit -n10 my_dot_mat_mat(x, W)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print 'numpy.dot timings:'\n",
"%timeit -n10 numpy.dot(x, W)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Optional section ends here**\n",
"***"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Iterative learning of linear models\n",
"\n",
"We will learn the model with stochastic gradient descent on N data-points using mean square error (MSE) loss function, which is defined as follows:\n",
"\n",
"(5) $\n",
"E = \\frac{1}{2} \\sum_{n=1}^N ||\\mathbf{y}^n - \\mathbf{t}^n||^2 = \\sum_{n=1}^N E^n \\\\\n",
" E^n = \\frac{1}{2} ||\\mathbf{y}^n - \\mathbf{t}^n||^2\n",
"$\n",
"\n",
"(6) $ E^n = \\frac{1}{2} \\sum_{k=1}^K (y_k^n - t_k^n)^2 $\n",
" \n",
"Hence, the gradient w.r.t (with respect to) the $r$ output y of the model is defined as, so called delta function, $\\delta_r$: \n",
"\n",
"(8) $\\frac{\\partial{E^n}}{\\partial{y_{r}}} = (y^n_r - t^n_r) = \\delta^n_r \\quad ; \\quad\n",
" \\delta^n_r = y^n_r - t^n_r \\\\\n",
" \\frac{\\partial{E}}{\\partial{y_{r}}} = \\sum_{n=1}^N \\frac{\\partial{E^n}}{\\partial{y_{r}}} = \\sum_{n=1}^N \\delta^n_r\n",
"$\n",
"\n",
"Similarly, using the above $\\delta^n_r$ one can express the gradient of the weight $w_{sr}$ (from the s-th input to the r-th output) for linear model and MSE cost as follows:\n",
"\n",
"(9) $\n",
" \\frac{\\partial{E^n}}{\\partial{w_{sr}}} = (y^n_r - t^n_r)x_s^n = \\delta^n_r x_s^n \\quad\\\\\n",
" \\frac{\\partial{E}}{\\partial{w_{sr}}} = \\sum_{n=1}^N \\frac{\\partial{E^n}}{\\partial{w_{sr}}} = \\sum_{n=1}^N \\delta^n_r x_s^n\n",
"$\n",
"\n",
"and the gradient for bias parameter at the $r$-th output is:\n",
"\n",
"(10) $\n",
" \\frac{\\partial{E}}{\\partial{b_{r}}} = \\sum_{n=1}^N \\frac{\\partial{E^n}}{\\partial{b_{r}}} = \\sum_{n=1}^N \\delta^n_r\n",
"$"
]
},
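{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check of the gradient formulas above (a standalone sketch with made-up data, not part of the exercises), one can compare the analytic gradients (9) and (10) against numerical finite differences on a tiny random example; the names `Xc`, `Tc`, `Wc`, `bc` below are our own and are not reused later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#standalone numerical check of equations (9) and (10) on a tiny example\n",
"check_rng = numpy.random.RandomState(42)\n",
"Xc = check_rng.uniform(-1, 1, (4, 3))   #N=4 data-points, D=3 inputs\n",
"Tc = check_rng.uniform(-1, 1, (4, 2))   #K=2 outputs\n",
"Wc = check_rng.uniform(-0.1, 0.1, (3, 2))\n",
"bc = numpy.zeros((2,))\n",
"\n",
"def mse(Wc, bc):\n",
"    Yc = numpy.dot(Xc, Wc) + bc\n",
"    return 0.5 * numpy.sum((Yc - Tc)**2)\n",
"\n",
"#analytic gradients, equations (9) and (10): the deltas are (y - t)\n",
"deltas = numpy.dot(Xc, Wc) + bc - Tc\n",
"grad_W_analytic = numpy.dot(Xc.T, deltas)\n",
"grad_b_analytic = numpy.sum(deltas, axis=0)\n",
"\n",
"#numerical gradient of a single weight via central finite differences\n",
"eps = 1e-6\n",
"Wp, Wm = Wc.copy(), Wc.copy()\n",
"Wp[0, 1] += eps\n",
"Wm[0, 1] -= eps\n",
"grad_w01_numeric = (mse(Wp, bc) - mse(Wm, bc)) / (2 * eps)\n",
"\n",
"print 'analytic  dE/dw_01:', grad_W_analytic[0, 1]\n",
"print 'numerical dE/dw_01:', grad_w01_numeric"
]
},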
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"![Making Predictions](res/singleLayerNetPredict.png)\n",
" \n",
" * Input vector $\\mathbf{x} = (x_1, x_2, \\ldots, x_D) $\n",
" * Output scalar $y_1$\n",
" * Weight matrix $\\mathbf{W}$: $w_{ik}$ is the weight from input $x_i$ to output $y_k$. Note that here $\\mathbf{W}$ is really a vector, since there is a single scalar output $y_1$.\n",
" * Scalar bias $b$ for the only output in our model \n",
" * Scalar target $t$ for the only output in our model\n",
" \n",
"First, ensure you can make use of the data provider (note that the data has been normalised to zero mean and unit variance, hence the effective range differs from the values you will find in the file):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from mlp.dataset import MetOfficeDataProvider\n",
"\n",
"modp = MetOfficeDataProvider(10, batch_size=10, max_num_batches=2, randomize=False)\n",
"\n",
"%precision 2\n",
"for x, t in modp:\n",
" print 'Observations: ', x\n",
" print 'To predict: ', t"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercise 4\n",
"\n",
"The below code implements a very simple variant of stochastic gradient descent for the rainfall prediction example. Your task is to implement 5 functions in the next cell and then run two next cells that 1) build sgd functions and 2) run the actual training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\n",
"#When implementing those, take into account the mini-batch case, for which one is\n",
"#expected to sum the errors for each example\n",
"\n",
"def fprop(x, W, b):\n",
" #code implementing eq. (3)\n",
" raise NotImplementedError('Write me!')\n",
"\n",
"def cost(y, t):\n",
" #Mean Square Error cost, equation (5)\n",
" raise NotImplementedError('Write me!')\n",
"\n",
"def cost_grad(y, t):\n",
" #Gradient of the cost w.r.t y equation (8)\n",
" raise NotImplementedError('Write me!')\n",
"\n",
"def cost_wrt_W(cost_grad, x):\n",
" #Gradient of the cost w.r.t W, equation (9)\n",
" raise NotImplementedError('Write me!')\n",
" \n",
"def cost_wrt_b(cost_grad):\n",
" #Gradient of the cost w.r.t to b, equation (10)\n",
" raise NotImplementedError('Write me!')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"\n",
"def sgd_epoch(data_provider, W, b, learning_rate):\n",
" mse_stats = []\n",
" \n",
" #get the minibatch of data\n",
" for x, t in data_provider:\n",
" \n",
" #1. get the estimate of y\n",
" y = fprop(x, W, b)\n",
"\n",
" #2. compute the loss function\n",
" tmp = cost(y, t)\n",
" mse_stats.append(tmp)\n",
" \n",
" #3. compute the grad of the cost w.r.t the output layer activation y\n",
" #i.e. how the cost changes when output y changes\n",
" cost_grad_deltas = cost_grad(y, t)\n",
"\n",
" #4. compute the gradients w.r.t model's parameters\n",
" grad_W = cost_wrt_W(cost_grad_deltas, x)\n",
" grad_b = cost_wrt_b(cost_grad_deltas)\n",
"\n",
" #4. Update the model, we update with the mean gradient\n",
" # over the minibatch, rather than sum of particular gradients\n",
" # in a minibatch, to do so we scale the learning rate by batch_size\n",
" batch_size = x.shape[0]\n",
" effect_learn_rate = learning_rate / batch_size\n",
"\n",
" W = W - effect_learn_rate * grad_W\n",
" b = b - effect_learn_rate * grad_b\n",
" \n",
" return W, b, numpy.mean(mse_stats)\n",
"\n",
"def sgd(data_provider, W, b, learning_rate=0.1, max_epochs=10):\n",
" \n",
" for epoch in xrange(0, max_epochs):\n",
" #reset the data provider\n",
" data_provider.reset()\n",
" \n",
" #train for one epoch\n",
" W, b, mean_cost = \\\n",
" sgd_epoch(data_provider, W, b, learning_rate)\n",
" \n",
" print \"MSE training cost after %d-th epoch is %f\" % (epoch + 1, mean_cost)\n",
" \n",
" return W, b\n",
" \n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"\n",
"#some hyper-parameters\n",
"window_size = 12\n",
"irange = 0.1\n",
"learning_rate = 0.01\n",
"max_epochs=40\n",
"\n",
"# note, while developing you can set max_num_batches to some positive number to limit\n",
"# the number of training data-points (you will get feedback faster)\n",
"mdp = MetOfficeDataProvider(window_size, batch_size=10, max_num_batches=-100, randomize=False)\n",
"\n",
"#initialise the parameters\n",
"W = random_generator.uniform(-irange, irange, (window_size, 1))\n",
"b = random_generator.uniform(-irange, irange, (1, ))\n",
"\n",
"#train the model\n",
"sgd(mdp, W, b, learning_rate=learning_rate, max_epochs=max_epochs)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## Exercise 5\n",
"\n",
"Modify the above prediction (regression) problem so the model makes a binary classification whether the the weather is going to be one of those \\{rainy, not-rainy} (look at slide 12 of the 2nd lecture)\n",
"\n",
"Tip: You need to introduce the following changes:\n",
"1. Modify `MetOfficeDataProvider` (for example, inherit from MetOfficeDataProvider to create a new class MetOfficeDataProviderBin) and modify `next()` function so it returns as `targets` either 0 (not-rainy - if the the amount of rain [before mean/variance normalisation] is equal to 0) or 1 (rainy -- otherwise).\n",
"2. Modify the functions from previous exercise so the fprop implements `sigmoid` on top of affine transform.\n",
"3. Modify cost function to binary cross-entropy\n",
"4. Make sure you compute the gradients correctly (as you have changed both the output and the cost)\n"
]
},
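{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a starting point (an illustrative sketch, not a reference solution), the helpers below show one way the `sigmoid` and binary cross-entropy pieces could look; the function names and the clipping constant are our own choices, and the data provider changes from step 1 are left to you:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#sketch of the sigmoid non-linearity and the binary cross-entropy cost\n",
"def sigmoid(a):\n",
"    return 1.0 / (1.0 + numpy.exp(-a))\n",
"\n",
"def bce_cost(y, t, eps=1e-10):\n",
"    #clip predictions away from 0 and 1 to keep the logs finite\n",
"    y = numpy.clip(y, eps, 1 - eps)\n",
"    return -numpy.sum(t * numpy.log(y) + (1 - t) * numpy.log(1 - y))\n",
"\n",
"#note: for a sigmoid output trained with binary cross-entropy, the delta at the\n",
"#output simplifies to (y - t), so equations (9) and (10) keep the same form"
]
},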
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}