{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## CIFAR-10 and CIFAR-100 datasets\n", "\n", "[CIFAR-10 and CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) are a pair of image classification datasets collected by collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. They are labelled subsets of the much larger [80 million tiny images](dataset). They are a common benchmark task for image classification - a list of current accuracy benchmarks for both data sets are maintained by Rodrigo Benenson [here](http://rodrigob.github.io/are_we_there_yet/build/).\n", "\n", "As the name suggests, CIFAR-10 has images in 10 classes:\n", "\n", " airplane\n", " automobile\n", " bird \n", " cat\n", " deer\n", " dog\n", " frog\n", " horse\n", " ship\n", " truck\n", "\n", "with 6000 images per class for an overall dataset size of 60000. Each image has three (RGB) colour channels and pixel dimension 32×32, corresponding to a total dimension per input image of 3×32×32=3072. For each colour channel the input values have been normalised to the range [0, 1].\n", "\n", "CIFAR-100 has images of identical dimensions to CIFAR-10 but rather than 10 classes they are instead split across 100 fine-grained classes (and 20 coarser 'superclasses' comprising multiple finer classes):\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
SuperclassClasses
aquatic mammalsbeaver, dolphin, otter, seal, whale
fishaquarium fish, flatfish, ray, shark, trout
flowersorchids, poppies, roses, sunflowers, tulips
food containersbottles, bowls, cans, cups, plates
fruit and vegetablesapples, mushrooms, oranges, pears, sweet peppers
household electrical devicesclock, computer keyboard, lamp, telephone, television
household furniturebed, chair, couch, table, wardrobe
insectsbee, beetle, butterfly, caterpillar, cockroach
large carnivoresbear, leopard, lion, tiger, wolf
large man-made outdoor thingsbridge, castle, house, road, skyscraper
large natural outdoor scenescloud, forest, mountain, plain, sea
large omnivores and herbivorescamel, cattle, chimpanzee, elephant, kangaroo
medium-sized mammalsfox, porcupine, possum, raccoon, skunk
non-insect invertebratescrab, lobster, snail, spider, worm
peoplebaby, boy, girl, man, woman
reptilescrocodile, dinosaur, lizard, snake, turtle
small mammalshamster, mouse, rabbit, shrew, squirrel
treesmaple, oak, palm, pine, willow
vehicles 1bicycle, bus, motorcycle, pickup truck, train
vehicles 2lawn-mower, rocket, streetcar, tank, tractor
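{ "cell_type": "markdown", "metadata": {}, "source": [
"As a quick illustration of the data provider behaviour described above, the sketch below creates providers for both datasets, inspects the shapes of a single batch and maps the 1-of-K target of one example back to its string label via `label_map`. It only uses the arguments and attributes mentioned in this notebook; the shapes and values in the comments are what we would expect from the dataset description rather than verified output.\n",
"\n",
"```python\n",
"from mlp.data_providers import CIFAR10DataProvider, CIFAR100DataProvider\n",
"\n",
"train_data = CIFAR10DataProvider('train', batch_size=50)\n",
"for input_batch, target_batch in train_data:\n",
"    print(input_batch.shape)   # expect (50, 3072): flattened 3 x 32 x 32 pixel values\n",
"    print(target_batch.shape)  # expect (50, 10): 1-of-K encoded class targets\n",
"    # look up the string label of the first example in the batch\n",
"    print(train_data.label_map[target_batch[0].argmax()])\n",
"    break\n",
"\n",
"# CIFAR-100 provider returning targets for the 20 coarse superclasses\n",
"coarse_train_data = CIFAR100DataProvider('train', batch_size=50, use_coarse_targets=True)\n",
"print(coarse_train_data.num_classes)  # expect 20\n",
"```" ] },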
\n", "\n", "Each class has 600 examples in it, giving an overall dataset size of 60000 i.e. the same as CIFAR-10.\n", "\n", "Both CIFAR-10 and CIFAR-100 have standard splits into 50000 training examples and 10000 test examples. For CIFAR-100 as there is an optional Kaggle competition (see below) scored on predictions on the test set, we have used a non-standard assignation of examples to test and training set and only provided the inputs (and not target labels) for the 10000 examples chosen for the test set. \n", "\n", "For CIFAR-10 the 10000 test set examples have labels provided: to avoid any accidental over-fitting to the test set **you should only use these for the final evaluation of your model(s)**. If you repeatedly evaluate models on the test set during model development it is easy to end up indirectly fitting to the test labels - for those who have not already read it see this [excellent cautionary note from the MLPR notes by Iain Murray](http://www.inf.ed.ac.uk/teaching/courses/mlpr/2016/notes/w2a_train_test_val.html#warning-dont-fool-yourself-or-make-a-fool-of-yourself). \n", "\n", "For both CIFAR-10 and CIFAR-100, the remaining 50000 non-test examples have been split in to a 40000 example training dataset and a 10000 example validation dataset, each with target labels provided. If you wish to use a more complex cross-fold validation scheme you may want to combine these two portions of the dataset and define your own functions for separating out a validation set.\n", "\n", "Data provider classes for both CIFAR-10 and CIFAR-100 are available in the `mlp.data_providers` module. Both have similar behaviour to the `MNISTDataProvider` used extensively last semester. A `which_set` argument can be used to specify whether to return a data provided for the training dataset (`which_set='train'`) or validation dataset (`which_set='valid'`).\n", "\n", "The CIFAR-100 data provider also takes an optional `use_coarse_targets` argument in its constructor. By default this is set to `False` and the targets returned by the data provider correspond to 1-of-K encoded binary vectors for the 100 fine-grained object classes. If `use_coarse_targets=True` then instead the data provider will return 1-of-K encoded binary vector targets for the 20 coarse-grained superclasses associated with each input instead.\n", "\n", "Both data provider classes provide a `label_map` attribute which is a list of strings which are the class labels corresponding to the integer targets (i.e. prior to conversion to a 1-of-K encoded binary vector)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Accessing the CIFAR-10 and CIFAR-100 data\n", "\n", "Before using the data provider objects you will need to make sure the data files are accessible to the `mlp` package by existing under the directory specified by the `MLP_DATA_DIR` path.\n", "\n", "The data is available as compressed NumPy `.npz` files\n", "\n", " cifar-10-train.npz 235MB\n", " cifar-10-valid.npz 59MB\n", " cifar-10-test-inputs.npz 59MB\n", " cifar-10-test-targets.npz 10KB\n", " cifar-100-train.npz 235MB\n", " cifar-100-valid.npz 59MB\n", " cifar-100-test-inputs.npz 59MB\n", "\n", "\n", "in the AFS directory `/afs/inf.ed.ac.uk/group/teaching/mlp/data`.\n", "\n", "If you are working on DICE one option is to redefine your `MLP_DATA_DIR` to directly point to the shared AFS data directory by editing the `env_vars.sh` start up file for your environment. 
{ "cell_type": "markdown", "metadata": {}, "source": [
"## Example two-layer classifier models\n",
"\n",
"Example code is given below for creating instances of the CIFAR-10 and CIFAR-100 data provider objects and using them to train simple two-layer feedforward network models with rectified linear activations in TensorFlow. You may wish to use this code as a starting point for your own experiments." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [
"import os\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"from mlp.data_providers import CIFAR10DataProvider, CIFAR100DataProvider\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### CIFAR-10" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [
"train_data = CIFAR10DataProvider('train', batch_size=50)\n",
"valid_data = CIFAR10DataProvider('valid', batch_size=50)" ] },
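{ "cell_type": "markdown", "metadata": {}, "source": [
"As an aside, if you wish to define your own training/validation split (for example for the cross-fold validation scheme mentioned earlier), one option is to pool the arrays underlying the two providers and re-split them yourself. The sketch below shows one way this might look; it assumes the provider objects expose `inputs` and `targets` arrays (only `inputs` is used explicitly elsewhere in this notebook, so treat `targets` as an assumption to check).\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# pool the provided 40000 training and 10000 validation examples\n",
"all_inputs = np.concatenate([train_data.inputs, valid_data.inputs], axis=0)\n",
"all_targets = np.concatenate([train_data.targets, valid_data.targets], axis=0)\n",
"\n",
"# draw a fresh random 40000 / 10000 partition of the pooled examples\n",
"rng = np.random.RandomState(123)\n",
"perm = rng.permutation(all_inputs.shape[0])\n",
"new_train_inputs, new_valid_inputs = all_inputs[perm[:40000]], all_inputs[perm[40000:]]\n",
"new_train_targets, new_valid_targets = all_targets[perm[:40000]], all_targets[perm[40000:]]\n",
"```" ] },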
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [
"def fully_connected_layer(inputs, input_dim, output_dim, nonlinearity=tf.nn.relu):\n",
"    # initialise weights with a standard deviation that shrinks with the layer fan-in plus fan-out\n",
"    weights = tf.Variable(\n",
"        tf.truncated_normal(\n",
"            [input_dim, output_dim], stddev=2. / (input_dim + output_dim)**0.5),\n",
"        name='weights')\n",
"    biases = tf.Variable(tf.zeros([output_dim]), name='biases')\n",
"    outputs = nonlinearity(tf.matmul(inputs, weights) + biases)\n",
"    return outputs" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [
"inputs = tf.placeholder(tf.float32, [None, train_data.inputs.shape[1]], 'inputs')\n",
"targets = tf.placeholder(tf.float32, [None, train_data.num_classes], 'targets')\n",
"num_hidden = 200\n",
"\n",
"with tf.name_scope('fc-layer-1'):\n",
"    hidden_1 = fully_connected_layer(inputs, train_data.inputs.shape[1], num_hidden)\n",
"with tf.name_scope('output-layer'):\n",
"    outputs = fully_connected_layer(hidden_1, num_hidden, train_data.num_classes, tf.identity)\n",
"\n",
"with tf.name_scope('error'):\n",
"    error = tf.reduce_mean(\n",
"        tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets))\n",
"with tf.name_scope('accuracy'):\n",
"    accuracy = tf.reduce_mean(tf.cast(\n",
"        tf.equal(tf.argmax(outputs, 1), tf.argmax(targets, 1)),\n",
"        tf.float32))\n",
"\n",
"with tf.name_scope('train'):\n",
"    train_step = tf.train.AdamOptimizer().minimize(error)\n",
"\n",
"init = tf.global_variables_initializer()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [
"with tf.Session() as sess:\n",
"    sess.run(init)\n",
"    for e in range(10):\n",
"        running_error = 0.\n",
"        running_accuracy = 0.\n",
"        for input_batch, target_batch in train_data:\n",
"            _, batch_error, batch_acc = sess.run(\n",
"                [train_step, error, accuracy],\n",
"                feed_dict={inputs: input_batch, targets: target_batch})\n",
"            running_error += batch_error\n",
"            running_accuracy += batch_acc\n",
"        running_error /= train_data.num_batches\n",
"        running_accuracy /= train_data.num_batches\n",
"        print('End of epoch {0:02d}: err(train)={1:.2f} acc(train)={2:.2f}'\n",
"              .format(e + 1, running_error, running_accuracy))\n",
"        if (e + 1) % 5 == 0:\n",
"            valid_error = 0.\n",
"            valid_accuracy = 0.\n",
"            for input_batch, target_batch in valid_data:\n",
"                batch_error, batch_acc = sess.run(\n",
"                    [error, accuracy],\n",
"                    feed_dict={inputs: input_batch, targets: target_batch})\n",
"                valid_error += batch_error\n",
"                valid_accuracy += batch_acc\n",
"            valid_error /= valid_data.num_batches\n",
"            valid_accuracy /= valid_data.num_batches\n",
"            print(' err(valid)={0:.2f} acc(valid)={1:.2f}'\n",
"                  .format(valid_error, valid_accuracy))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### CIFAR-100" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [
"train_data = CIFAR100DataProvider('train', batch_size=50)\n",
"valid_data = CIFAR100DataProvider('valid', batch_size=50)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [
"tf.reset_default_graph()\n",
"\n",
"inputs = tf.placeholder(tf.float32, [None, train_data.inputs.shape[1]], 'inputs')\n",
"targets = tf.placeholder(tf.float32, [None, train_data.num_classes], 'targets')\n",
"num_hidden = 200\n",
"\n",
"with tf.name_scope('fc-layer-1'):\n",
"    hidden_1 = fully_connected_layer(inputs, train_data.inputs.shape[1], num_hidden)\n",
"with tf.name_scope('output-layer'):\n",
"    outputs = fully_connected_layer(hidden_1, num_hidden, train_data.num_classes, tf.identity)\n",
"\n",
"with tf.name_scope('error'):\n",
"    error = tf.reduce_mean(\n",
"        tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets))\n",
"with tf.name_scope('accuracy'):\n",
"    accuracy = tf.reduce_mean(tf.cast(\n",
"        tf.equal(tf.argmax(outputs, 1), tf.argmax(targets, 1)),\n",
"        tf.float32))\n",
"\n",
"with tf.name_scope('train'):\n",
"    train_step = tf.train.AdamOptimizer().minimize(error)\n",
"\n",
"init = tf.global_variables_initializer()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [
"# create the session without a context manager so it stays open for\n",
"# computing the test set predictions in the cells below\n",
"sess = tf.Session()\n",
"sess.run(init)\n",
"for e in range(10):\n",
"    running_error = 0.\n",
"    running_accuracy = 0.\n",
"    for input_batch, target_batch in train_data:\n",
"        _, batch_error, batch_acc = sess.run(\n",
"            [train_step, error, accuracy],\n",
"            feed_dict={inputs: input_batch, targets: target_batch})\n",
"        running_error += batch_error\n",
"        running_accuracy += batch_acc\n",
"    running_error /= train_data.num_batches\n",
"    running_accuracy /= train_data.num_batches\n",
"    print('End of epoch {0:02d}: err(train)={1:.2f} acc(train)={2:.2f}'\n",
"          .format(e + 1, running_error, running_accuracy))\n",
"    if (e + 1) % 5 == 0:\n",
"        valid_error = 0.\n",
"        valid_accuracy = 0.\n",
"        for input_batch, target_batch in valid_data:\n",
"            batch_error, batch_acc = sess.run(\n",
"                [error, accuracy],\n",
"                feed_dict={inputs: input_batch, targets: target_batch})\n",
"            valid_error += batch_error\n",
"            valid_accuracy += batch_acc\n",
"        valid_error /= valid_data.num_batches\n",
"        valid_accuracy /= valid_data.num_batches\n",
"        print(' err(valid)={0:.2f} acc(valid)={1:.2f}'\n",
"              .format(valid_error, valid_accuracy))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [
"## Predicting test data classes and creating a Kaggle submission file\n",
"\n",
"An optional [Kaggle in Class](https://inclass.kaggle.com/c/ml2016-7-cifar-100) competition (see email for invite link; you will need to sign up with an `ed.ac.uk` email address to be able to enter) is being run on the CIFAR-100 (fine-grained) classification task. The competition is scored on the proportion of classes correctly predicted on the test set inputs (for which no class labels are provided). Half of the 10000 test inputs are used to calculate a public leaderboard score which will be visible while the competition is in progress, and the other half are used to compute the private leaderboard score which will only be unveiled at the end of the competition. Each entrant can make up to two submissions of predictions each day during the competition.\n",
"\n",
"The code and helper function below illustrate how to use the predicted outputs of the TensorFlow network model we just trained to create a submission file which can be uploaded to Kaggle. The required format of the submission file is a `.csv` (comma-separated values) file with two columns: the first column is the integer index of the test input in the array in the provided data file (i.e. first row 0, second row 1 and so on) and the second column is the corresponding predicted class label as an integer between 0 and 99 inclusive. The predictions must be preceded by a header line as in the following example\n",
"\n",
"```\n",
"Id,Class\n",
"0,81\n",
"1,35\n",
"2,12\n",
"...\n",
"```\n",
"\n",
"Integer class label predictions can be computed from the class probability outputs of the model by performing an `argmax` operation along the last dimension."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "test_inputs = np.load(os.path.join(os.environ['MLP_DATA_DIR'], 'cifar-100-test-inputs.npz'))['inputs']\n", "test_predictions = sess.run(tf.nn.softmax(outputs), feed_dict={inputs: test_inputs})" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def create_kaggle_submission_file(predictions, output_file, overwrite=False):\n", " if predictions.shape != (10000, 100):\n", " raise ValueError('predictions should be an array of shape (10000, 25).')\n", " if not (np.all(predictions >= 0.) and \n", " np.all(predictions <= 1.)):\n", " raise ValueError('predictions should be an array of probabilities in [0, 1].')\n", " if not np.allclose(predictions.sum(-1), 1):\n", " raise ValueError('predictions rows should sum to one.')\n", " if os.path.exists(output_file) and not overwrite:\n", " raise ValueError('File already exists at {0}'.format(output_file))\n", " pred_classes = predictions.argmax(-1)\n", " ids = np.arange(pred_classes.shape[0])\n", " np.savetxt(output_file, np.column_stack([ids, pred_classes]), fmt='%d',\n", " delimiter=',', header='Id,Class', comments='')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "create_kaggle_submission_file(test_predictions, 'cifar-100-example-network-submission.csv', True)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [conda env:mlp]", "language": "python", "name": "conda-env-mlp-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 1 }