diff --git a/notebooks/09a_Object_recognition_with_CIFAR-10_and_CIFAR-100.ipynb b/notebooks/09a_Object_recognition_with_CIFAR-10_and_CIFAR-100.ipynb
index 4cefe09..475ade7 100644
--- a/notebooks/09a_Object_recognition_with_CIFAR-10_and_CIFAR-100.ipynb
+++ b/notebooks/09a_Object_recognition_with_CIFAR-10_and_CIFAR-100.ipynb
@@ -27,7 +27,7 @@
     "\n",
     "> airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck\n",
     "\n",
-    "with 6000 images per class for an overall dataset size of 60000. Each image has three (RGB) color channels and pixel dimension 32×32, corresponding to a total dimension per input image of 3×32×32=3072.\n",
+    "with 6000 images per class for an overall dataset size of 60000. Each image has three (RGB) colour channels and pixel dimension 32×32, corresponding to a total dimension per input image of 3×32×32=3072. For each colour channel the input values have been normalised to the range [0, 1].\n",
     "\n",
     "CIFAR-100 has images of identical dimensions to CIFAR-10 but rather than 10 classes they are instead split across 100 fine-grained classes (and 20 coarser 'superclasses' comprising multiple finer classes):\n",
     "\n",
@@ -126,6 +126,8 @@
     "\n",
     "The CIFAR-100 data provider also takes an optional `use_coarse_targets` argument in its constructor. By default this is set to `False` and the targets returned by the data provider correspond to 1-of-K encoded binary vectors for the 100 fine-grained object classes. If `use_coarse_targets=True` then instead the data provider will return 1-of-K encoded binary vector targets for the 20 coarse-grained superclasses associated with each input instead.\n",
     "\n",
+    "Both data provider classes have a `label_map` attribute, a list of strings giving the class labels corresponding to the integer targets (i.e. prior to conversion to a 1-of-K encoded binary vector).\n",
+    "\n",
     "Below example code is given for creating instances of the CIFAR-10 and CIFAR-100 data provider objects and using them to train simple two-layer feedforward network models with rectified linear activations in TensorFlow. You may wish to use this code as a starting point for your own experiments."
    ]
   },
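The relationship between the integer targets, the 1-of-K encoded binary vectors, and the `label_map` described in the added text can be sketched with plain NumPy. This is a minimal illustration, not the course's actual data provider code: the `label_map` list below uses the CIFAR-10 class names from the text, while `to_one_hot` and `targets_to_labels` are hypothetical helper names introduced here for demonstration.

```python
import numpy as np

# Stand-in for the providers' `label_map` attribute: class label strings
# indexed by integer target (CIFAR-10 class names, in order).
label_map = ['airplane', 'automobile', 'bird', 'cat', 'deer',
             'dog', 'frog', 'horse', 'ship', 'truck']

def to_one_hot(int_targets, num_classes):
    """Convert integer class targets to 1-of-K encoded binary vectors."""
    one_hot = np.zeros((len(int_targets), num_classes))
    one_hot[np.arange(len(int_targets)), int_targets] = 1.
    return one_hot

def targets_to_labels(one_hot_targets, label_map):
    """Recover string class labels from 1-of-K targets via argmax."""
    return [label_map[i] for i in np.argmax(one_hot_targets, axis=1)]

int_targets = np.array([3, 0, 9])
one_hot = to_one_hot(int_targets, len(label_map))
print(targets_to_labels(one_hot, label_map))  # ['cat', 'airplane', 'truck']
```

The same argmax-then-index pattern is useful when inspecting predictions from a trained model: taking the argmax over a network's output vector and indexing into `label_map` turns a class probability vector back into a human-readable label.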