mlpractical/01_Linear_Models.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction\n",
"\n",
"This tutorial is about linear transforms - a basic building block of many, including deep learning, models.\n",
"\n",
"# Short recap and syncing repositories\n",
"\n",
"Before you proceed onwards, remember to activate you virtual environments so you can use the software you installed last week as well as run the notebooks in interactive mode, no through github.com website.\n",
"\n",
"## Virtual environments\n",
"\n",
"To activate virtual environment:\n",
" * If you were on Tuesday/Wednesday group type `activate_mlp` or `source ~/mlpractical/venv/bin/activate`\n",
" * If you were on Monday group:\n",
" + and if you have chosen **comfy** way type: workon mlpractival\n",
" + and if you have chosen **generic** way, `source` your virutal environment using `source` and specyfing the path to the activate script (you need to localise it yourself, there were not any general recommendations w.r.t dir structure and people have installed it in different places, usually somewhere in the home directories. If you cannot easily find it by yourself, use something like: `find . -iname activate` ):\n",
"\n",
"## On Synchronising repositories\n",
"\n",
"I started writing this, but do not think giving students a choice is a good way to progess, the most painless way to follow would be to ask them to stash their changes (with some meaningful message) and work on the clean updated repository. This way one can always (temporarily) recover the work once needed but everyone starts smoothly the next lab. We do not want to help anyone how to resolve the conflicts...\n",
"\n",
"Enter your git mlp repository you set up last week (i.e. ~/mlpractical/repo-mlp) and depending on how you want to proceed you either can:\n",
" 1. Overridde some changes you have made (both in the notebooks and/or in the code if you happen to modify parts that were updated by us) with the code we have provided for this lab\n",
" 2. Try to merge your code with ours (for example, if you want to use `MetOfficeDataProvider` you have written)\n",
" \n",
"Our recommendation is, you should at least keep the progress in the notebooks (so you can peek some details when needed)\n",
" \n",
"```\n",
"git pull\n",
"```\n",
"\n",
"## Default Synchronising Strategy\n",
"\n",
"Need to think/discuss this."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Linear and Affine Transforms\n",
"\n",
"Depending on the required level of details, one may need to. The basis of all linear models is so called affine transform, that is the transform that implements some (linear) rotation of some input points and shift (translation) them. Denote by $\\vec x$ some input vector, then the affine transform is defined as follows:\n",
"\n",
"![Making Predictions](res/singleLayerNetWts-1.png)\n",
"\n",
"$\n",
"\\begin{equation}\n",
" \\mathbf y=\\mathbf W \\mathbf x + \\mathbf b\n",
"\\end{equation}\n",
"$\n",
"\n",
"<b>Note:</b> the bias term can be incorporated as an additional column in the weight matrix, though in this tutorials we will use a separate variable to for this purpose.\n",
"\n",
"An $i$th element of vecotr $\\mathbf y$ is hence computed as:\n",
"\n",
"$\n",
"\\begin{equation}\n",
" y_i=\\mathbf w_i \\mathbf x + b_i\n",
"\\end{equation}\n",
"$\n",
"\n",
"where $\\mathbf w_i$ is the $i$th row of $\\mathbf W$\n",
"\n",
"$\n",
"\\begin{equation}\n",
" y_i=\\sum_j w_{ji}x_j + b_i\n",
"\\end{equation}\n",
"$\n",
"\n",
"???\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ 0.06875593 -0.69616488 0.08823301 0.34533413 -0.22129962]\n"
]
}
],
"source": [
"import numpy\n",
"x=numpy.random.uniform(-1,1,(4,)); \n",
"W=numpy.random.uniform(-1,1,(5,4)); \n",
"y=numpy.dot(W,x);\n",
"print y"
]
},
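{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above applies only the linear part $\\mathbf W \\mathbf x$. As a minimal sketch (reusing `x` and `W` from the previous cell), the full affine transform simply adds a bias vector $\\mathbf b$ with one entry per output:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# a minimal sketch of the full affine transform y = Wx + b,\n",
"# reusing x (4,) and W (5, 4) from the cell above\n",
"b = numpy.random.uniform(-1, 1, (5,))\n",
"y = numpy.dot(W, x) + b\n",
"print y"
]
},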
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[ 0.63711 0.11566944 0.74416104]\n",
" [-0.01335825 0.46206922 -0.1109265 ]\n",
" [-0.37523063 -0.06755371 0.04352121]\n",
" [ 0.25885831 -0.53660826 -0.40905639]]\n"
]
}
],
"source": [
"def my_dot(x, W, b):\n",
" y = numpy.zeros_like((x.shape[0], W.shape[1]))\n",
" raise NotImplementedError('Write me!')\n",
" return y"
]
},
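{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you would like something to check your implementation against, below is one possible reference sketch of the same computation (the name `my_dot_ref` and the test data are purely illustrative). It spells out the sums with explicit loops; a vectorised equivalent is simply `numpy.dot(x, W) + b`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# one possible reference implementation, with explicit loops so the indexing is visible;\n",
"# a vectorised equivalent is numpy.dot(x, W) + b\n",
"def my_dot_ref(x, W, b):\n",
"    y = numpy.zeros((x.shape[0], W.shape[1]))\n",
"    for n in xrange(x.shape[0]):          # loop over examples\n",
"        for k in xrange(W.shape[1]):      # loop over outputs\n",
"            for i in xrange(W.shape[0]):  # loop over inputs\n",
"                y[n, k] += x[n, i] * W[i, k]\n",
"            y[n, k] += b[k]\n",
"    return y\n",
"\n",
"# quick check against the numpy reference on small random data\n",
"xt = numpy.random.uniform(-1, 1, (3, 4))\n",
"Wt = numpy.random.uniform(-1, 1, (4, 2))\n",
"bt = numpy.random.uniform(-1, 1, (2,))\n",
"print numpy.allclose(my_dot_ref(xt, Wt, bt), numpy.dot(xt, Wt) + bt)"
]
},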
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ 0 1 2 3 4 5 6 7 8 9 10]\n"
]
}
],
"source": [
"\n",
"for itr in xrange(0,100):\n",
" my_dot(W,x)\n",
" \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Iterative learning of linear models\n",
"\n",
"We will learn the model with (batch for now) gradient descent.\n",
"\n",
"\n",
"## Running example\n",
"\n",
"![Making Predictions](res/singleLayerNetPredict.png)\n",
" \n",
"\n",
" * Input vector $\\mathbf{x} = (x_1, x_1, \\ldots, x_d)^T $\n",
" * Output vector $\\mathbf{y} = (y_1, \\ldots, y_K)^T $\n",
" * Weight matrix $\\mathbf{W}$: $w_{ki}$ is the weight from input $x_i$ to output $y_k$\n",
" * Bias $w_{k0}$ is the bias for output $k$\n",
" * Targets vector $\\mathbf{t} = (t_1, \\ldots, t_K)^T $\n",
"\n",
"\n",
"$\n",
" y_k = \\sum_{i=1}^d w_{ki} x_i + w_{k0}\n",
"$\n",
"\n",
"If we define $x_0=1$ we can simplify the above to\n",
"\n",
"$\n",
" y_k = \\sum_{i=0}^d w_{ki} x_i \\quad ; \\quad \\mathbf{y} = \\mathbf{Wx}\n",
"$\n",
"\n",
"$\n",
"E = \\frac{1}{2} \\sum_{n=1}^N ||\\mathbf{y}^n - \\mathbf{t}^n||^2 = \\sum_{n=1}^N E^n \\\\\n",
" E^n = \\frac{1}{2} ||\\mathbf{y}^n - \\mathbf{t}^n||^2\n",
"$\n",
"\n",
" $ E^n = \\frac{1}{2} \\sum_{k=1}^K (y_k^n - t_k^n)^2 $\n",
" set $\\mathbf{W}$ to minimise $E$ given the training set\n",
" \n",
"$\n",
" E^n = \\frac{1}{2} \\sum_{k=1}^K (y^n_k - t^n_k)^2 \n",
" = \\frac{1}{2} \\sum_{k=1}^K \\left( \\sum_{i=0}^d w_{ki} x^n_i - t^n_k \\right)^2 \\\\\n",
" \\pderiv{E^n}{w_{rs}} = (y^n_r - t^n_r)x_s^n = \\delta^n_r x_s^n \\quad ; \\quad\n",
" \\delta^n_r = y^n_r - t^n_r \\\\\n",
" \\pderiv{E}{w_{rs}} = \\sum_{n=1}^N \\pderiv{E^n}{w_{rs}} = \\sum_{n=1}^N \\delta^n_r x_s^n\n",
"$\n",
"\n",
"\n",
"\\begin{algorithmic}[1]\n",
" \\Procedure{gradientDescentTraining}{$\\mvec{X}, \\mvec{T},\n",
" \\mvec{W}$}\n",
" \\State initialize $\\mvec{W}$ to small random numbers\n",
"% \\State randomize order of training examples in $\\mvec{X}\n",
" \\While{not converged}\n",
" \\State for all $k,i$: $\\Delta w_{ki} \\gets 0$\n",
" \\For{$n \\gets 1,N$}\n",
" \\For{$k \\gets 1,K$}\n",
" \\State $y_k^n \\gets \\sum_{i=0}^d w_{ki} x_{ki}^n$\n",
" \\State $\\delta_k^n \\gets y_k^n - t_k^n$\n",
" \\For{$i \\gets 1,d$}\n",
" \\State $\\Delta w_{ki} \\gets \\Delta w_{ki} + \\delta_k^n \\cdot x_i^n$\n",
" \\EndFor\n",
" \\EndFor\n",
" \\EndFor\n",
" \\State for all $k,i$: $w_{ki} \\gets w_{ki} - \\eta \\cdot \\Delta w_{ki}$\n",
" \\EndWhile\n",
" \\EndProcedure\n",
"\\end{algorithmic}"
]
},
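{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal NumPy sketch of the batch gradient descent procedure above, written in the batched convention (inputs as rows of an $(N, d)$ matrix) and with a separate bias variable, as discussed earlier. The synthetic data, the variable names (`true_W`, `eta`, ...) and the learning rate are illustrative choices only:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# a minimal sketch of batch gradient descent for a linear model y = x W + b,\n",
"# trained on synthetic data; names, sizes and learning rate are illustrative\n",
"numpy.random.seed(42)\n",
"\n",
"N, d, K = 100, 4, 3\n",
"true_W = numpy.random.uniform(-1, 1, (d, K))\n",
"true_b = numpy.random.uniform(-1, 1, (K,))\n",
"X = numpy.random.uniform(-1, 1, (N, d))\n",
"T = numpy.dot(X, true_W) + true_b        # targets from a ground-truth transform\n",
"\n",
"W = numpy.random.uniform(-0.1, 0.1, (d, K))  # initialise to small random numbers\n",
"b = numpy.zeros((K,))\n",
"eta = 0.01                                   # learning rate\n",
"\n",
"for epoch in xrange(500):\n",
"    Y = numpy.dot(X, W) + b          # forward pass: predictions for the whole batch\n",
"    deltas = Y - T                   # delta^n_k = y^n_k - t^n_k\n",
"    grad_W = numpy.dot(X.T, deltas)  # dE/dW, summed over the batch\n",
"    grad_b = deltas.sum(axis=0)      # dE/db\n",
"    W -= eta * grad_W\n",
"    b -= eta * grad_b\n",
"    if epoch % 100 == 0:\n",
"        print 0.5 * (deltas ** 2).sum()  # sum-of-squares error E"
]
},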
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Excercises"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Fun Stuff\n",
"\n",
"So what on can do with linear transform, and what are the properties of those?\n",
"\n",
"Exercise, show, the LT is invertible, basically, solve the equation:\n",
"\n",
"y=Wx+b, given y (transformed image), find such x that is the same as the original one."
]
},
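{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hint, here is one possible numerical sketch of inverting the transform with `numpy.linalg.solve` (it assumes $\\mathbf W$ is square and non-singular; the data below is freshly generated for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# invert y = Wx + b numerically: solve W x = (y - b) for x\n",
"# (assumes W is square and non-singular; the data here is illustrative)\n",
"W = numpy.random.uniform(-1, 1, (4, 4))\n",
"b = numpy.random.uniform(-1, 1, (4,))\n",
"x_orig = numpy.random.uniform(-1, 1, (4,))\n",
"\n",
"y = numpy.dot(W, x_orig) + b          # forward (affine) transform\n",
"x_rec = numpy.linalg.solve(W, y - b)  # recover the original input\n",
"print numpy.allclose(x_orig, x_rec)"
]
},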
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.9"
}
},
"nbformat": 4,
"nbformat_minor": 0
}