diff --git a/README.md b/README.md
index 058531e..c446b57 100644
--- a/README.md
+++ b/README.md
@@ -5,31 +5,53 @@ This repository contains the code for the University of Edinburgh [School of Inf
 This assignment-based course is focused on the implementation and evaluation of machine learning systems. Students who do this course will have experience in the design, implementation, training, and evaluation of machine learning systems.
 The code in this repository is split into:
-
- * a Python package `mlp`, a [NumPy](http://www.numpy.org/) based neural network package designed specifically for the course that students will implement parts of and extend during the course labs and assignments,
- * a series of [Jupyter](http://jupyter.org/) notebooks in the `notebooks` directory containing explanatory material and coding exercises to be completed during the course labs.
-
+1. notebooks:
+    1. Introduction_to_tensorflow: Introduces students to the basics of tensorflow and lower-level operations.
+    2. Introduction_to_tf_mlp_repo: Introduces students to the high-level functionality of this repo and how to run an experiment. The code is heavily commented and documented, so spend most of your time reading and understanding it by running simple experiments and changing pieces of code to see the impact on the system.
+2. utils:
+    1. network_summary: Provides utilities for getting network summaries, such as the number of parameters and the names of layers.
+    2. parser_utils: Used to parse the arguments passed to the training scripts.
+    3. storage: Responsible for storing network statistics.
+3. data_providers.py: Provides the data providers for training, validation and testing.
+4. network_architectures.py: Defines the network architectures. We provide VGGNet as an example.
+5. network_builder.py: Builds the tensorflow computation graph. In more detail, it builds the losses, tensorflow summaries and training operations.
+6. network_trainer.py: Runs an experiment, composed of training, validation and testing. It is set up to use command-line arguments, so that one can easily write multiple bash scripts with different hyperparameters and run experiments very quickly with minimal code changes.
+
+
 ## Getting set up
 Detailed instructions for setting up a development environment for the course are given in [this file](notes/environment-set-up.md). Students doing the course will spend part of the first lab getting their own environment set up.
-
-## Frequent Issues/Solutions
-
-Don’t forget that from your /mlpractica/l folder you should first do
+Once you have set up the basic environment, install the requirements for the tf_mlp repo by running:
 ```
-git status #to check whether there are any changes in your local branch. If there are, you need to do:
-git add “path /to/file”
-git commit -m “some message”
+pip install -r requirements.txt
+```
+for CPU tensorflow, or
+```
+pip install -r requirements_gpu.txt
+```
+for GPU tensorflow.
+
+If you install the wrong version of tensorflow, simply run
+
+```
+pip uninstall $tensorflow_to_uninstall
+```
+replacing $tensorflow_to_uninstall with the tensorflow package you want to uninstall, and then install the correct one using pip install as usual.
+
+## Additional Packages
+
+For the tf_mlp repo you are required to install either the tensorflow-1.4.1 package for CPU users or the tensorflow_gpu-1.4.1 package for GPU users.
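Whichever build you need, after running the install commands below you can confirm which tensorflow actually ended up in your environment. A quick check (a generic pip/Python sketch, not something shipped with this repo):

```
# Show what pip has installed (one of the two should be present at version 1.4.1)
pip show tensorflow tensorflow_gpu

# Confirm the version that Python actually imports
python -c "import tensorflow as tf; print(tf.__version__)"
```

Each of these should report 1.4.1 once the right package is in place.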
Both of these can easily be installed via pip using: + +``` +pip install tensorflow ``` -Only if this is OK, you can run -``` -git checkout mlp2017-8/lab[n] -``` -Related to MLP module not found error: -Another thing is to make sure you have you MLP_DATA_DIR path correctly set. You can check this by typing -```echo $MLP_DATA_DIR``` -in the command line. If this is not set up, you need to follow the instructions on the set-up-environment to get going. +or -Finally, please make sure you have run -```python setup.py develop``` +``` +pip install tensorflow_gpu +``` diff --git a/cifar100_network_trainer.py b/cifar100_network_trainer.py new file mode 100644 index 0000000..ab6a227 --- /dev/null +++ b/cifar100_network_trainer.py @@ -0,0 +1,181 @@ +import argparse +import numpy as np +import tensorflow as tf +import tqdm +from data_providers import CIFAR100DataProvider +from network_builder import ClassifierNetworkGraph +from utils.parser_utils import ParserClass +from utils.storage import build_experiment_folder, save_statistics, get_best_validation_model_statistics + +tf.reset_default_graph() # resets any previous graphs to clear memory +parser = argparse.ArgumentParser(description='Welcome to CNN experiments script') # generates an argument parser +parser_extractor = ParserClass(parser=parser) # creates a parser class to process the parsed input + +batch_size, seed, epochs, logs_path, continue_from_epoch, tensorboard_enable, batch_norm, \ +strided_dim_reduction, experiment_prefix, dropout_rate_value = parser_extractor.get_argument_variables() +# returns a list of objects that contain +# our parsed input + +experiment_name = "experiment_{}_batch_size_{}_bn_{}_mp_{}".format(experiment_prefix, + batch_size, batch_norm, + strided_dim_reduction) +# generate experiment name + +rng = np.random.RandomState(seed=seed) # set seed + +train_data = CIFAR100DataProvider(which_set="train", batch_size=batch_size, rng=rng, random_sampling=True) +val_data = CIFAR100DataProvider(which_set="valid", batch_size=batch_size, rng=rng) +test_data = CIFAR100DataProvider(which_set="test", batch_size=batch_size, rng=rng) +# setup our data providers + +print("Running {}".format(experiment_name)) +print("Starting from epoch {}".format(continue_from_epoch)) + +saved_models_filepath, logs_filepath = build_experiment_folder(experiment_name, logs_path) # generate experiment dir + +# Placeholder setup +data_inputs = tf.placeholder(tf.float32, [batch_size, train_data.inputs.shape[1], train_data.inputs.shape[2], + train_data.inputs.shape[3]], 'data-inputs') +data_targets = tf.placeholder(tf.int32, [batch_size], 'data-targets') + +training_phase = tf.placeholder(tf.bool, name='training-flag') +rotate_data = tf.placeholder(tf.bool, name='rotate-flag') +dropout_rate = tf.placeholder(tf.float32, name='dropout-prob') + +classifier_network = ClassifierNetworkGraph(input_x=data_inputs, target_placeholder=data_targets, + dropout_rate=dropout_rate, batch_size=batch_size, + n_classes=train_data.num_classes, + is_training=training_phase, augment_rotate_flag=rotate_data, + strided_dim_reduction=strided_dim_reduction, + use_batch_normalization=batch_norm) # initialize our computational graph + +if continue_from_epoch == -1: # if this is a new experiment and not continuation of a previous one then generate a new + # statistics file + save_statistics(logs_filepath, "result_summary_statistics", ["epoch", "train_c_loss", "train_c_accuracy", + "val_c_loss", "val_c_accuracy", + "test_c_loss", "test_c_accuracy"], create=True) + +start_epoch = 
continue_from_epoch if continue_from_epoch != -1 else 0 # if new experiment start from 0 otherwise +# continue where left off + +summary_op, losses_ops, c_error_opt_op = classifier_network.init_train() # get graph operations (ops) + +total_train_batches = train_data.num_batches +total_val_batches = val_data.num_batches +total_test_batches = test_data.num_batches + +if tensorboard_enable: + print("saved tensorboard file at", logs_filepath) + writer = tf.summary.FileWriter(logs_filepath, graph=tf.get_default_graph()) + +init = tf.global_variables_initializer() # initialization op for the graph + +with tf.Session() as sess: + sess.run(init) # actually running the initialization op + train_saver = tf.train.Saver() # saver object that will save our graph so we can reload it later for continuation of + val_saver = tf.train.Saver() + best_val_accuracy = 0. + best_epoch = 0 + # training or inference + + if continue_from_epoch != -1: + train_saver.restore(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, + continue_from_epoch)) # restore previous graph to continue operations + best_val_accuracy, best_epoch = get_best_validation_model_statistics(logs_filepath, "result_summary_statistics") + print(best_val_accuracy, best_epoch) + + with tqdm.tqdm(total=epochs - start_epoch) as epoch_pbar: + for e in range(start_epoch, epochs): + total_c_loss = 0. + total_accuracy = 0. + with tqdm.tqdm(total=total_train_batches) as pbar_train: + for batch_idx, (x_batch, y_batch) in enumerate(train_data): + iter_id = e * total_train_batches + batch_idx + _, c_loss_value, acc = sess.run( + [c_error_opt_op, losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + # Here we execute the c_error_opt_op which trains the network and also the ops that compute the + # loss and accuracy, we save those in _, c_loss_value and acc respectively. + total_c_loss += c_loss_value # add loss of current iter to sum + total_accuracy += acc # add acc of current iter to sum + + iter_out = "iter_num: {}, train_loss: {}, train_accuracy: {}".format(iter_id, + total_c_loss / (batch_idx + 1), + total_accuracy / ( + batch_idx + 1)) # show + # iter statistics using running averages of previous iter within this epoch + pbar_train.set_description(iter_out) + pbar_train.update(1) + if tensorboard_enable and batch_idx % 25 == 0: # save tensorboard summary every 25 iterations + _summary = sess.run( + summary_op, + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + writer.add_summary(_summary, global_step=iter_id) + + total_c_loss /= total_train_batches # compute mean of los + total_accuracy /= total_train_batches # compute mean of accuracy + + save_path = train_saver.save(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + # save graph and weights + print("Saved current model at", save_path) + + total_val_c_loss = 0. + total_val_accuracy = 0. 
# run validation stage, note how training_phase placeholder is set to False + # and that we do not run the c_error_opt_op which runs gradient descent, but instead only call the loss ops + # to collect losses on the validation set + with tqdm.tqdm(total=total_val_batches) as pbar_val: + for batch_idx, (x_batch, y_batch) in enumerate(val_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_val_c_loss += c_loss_value + total_val_accuracy += acc + iter_out = "val_loss: {}, val_accuracy: {}".format(total_val_c_loss / (batch_idx + 1), + total_val_accuracy / (batch_idx + 1)) + pbar_val.set_description(iter_out) + pbar_val.update(1) + + total_val_c_loss /= total_val_batches + total_val_accuracy /= total_val_batches + + if best_val_accuracy < total_val_accuracy: # check if val acc better than the previous best and if + # so save current as best and save the model as the best validation model to be used on the test set + # after the final epoch + best_val_accuracy = total_val_accuracy + best_epoch = e + save_path = val_saver.save(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + print("Saved best validation score model at", save_path) + + epoch_pbar.update(1) + # save statistics of this epoch, train and val without test set performance + save_statistics(logs_filepath, "result_summary_statistics", + [e, total_c_loss, total_accuracy, total_val_c_loss, total_val_accuracy, + -1, -1]) + + val_saver.restore(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, best_epoch)) + # restore model with best performance on validation set + total_test_c_loss = 0. + total_test_accuracy = 0. 
+ # computer test loss and accuracy and save + with tqdm.tqdm(total=total_test_batches) as pbar_test: + for batch_idx, (x_batch, y_batch) in enumerate(test_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_test_c_loss += c_loss_value + total_test_accuracy += acc + iter_out = "test_loss: {}, test_accuracy: {}".format(total_test_c_loss / (batch_idx + 1), + total_test_accuracy / (batch_idx + 1)) + pbar_test.set_description(iter_out) + pbar_test.update(1) + + total_test_c_loss /= total_test_batches + total_test_accuracy /= total_test_batches + + save_statistics(logs_filepath, "result_summary_statistics", + ["test set performance", -1, -1, -1, -1, + total_test_c_loss, total_test_accuracy]) diff --git a/cifar10_network_trainer.py b/cifar10_network_trainer.py new file mode 100644 index 0000000..b6470b1 --- /dev/null +++ b/cifar10_network_trainer.py @@ -0,0 +1,183 @@ +import argparse +import numpy as np +import tensorflow as tf +import tqdm +from data_providers import CIFAR10DataProvider +from network_builder import ClassifierNetworkGraph +from utils.parser_utils import ParserClass +from utils.storage import build_experiment_folder, save_statistics, get_best_validation_model_statistics + +tf.reset_default_graph() # resets any previous graphs to clear memory +parser = argparse.ArgumentParser(description='Welcome to CNN experiments script') # generates an argument parser +parser_extractor = ParserClass(parser=parser) # creates a parser class to process the parsed input + +batch_size, seed, epochs, logs_path, continue_from_epoch, tensorboard_enable, batch_norm, \ +strided_dim_reduction, experiment_prefix, dropout_rate_value = parser_extractor.get_argument_variables() +# returns a list of objects that contain +# our parsed input + +experiment_name = "experiment_{}_batch_size_{}_bn_{}_mp_{}".format(experiment_prefix, + batch_size, batch_norm, + strided_dim_reduction) +# generate experiment name + +rng = np.random.RandomState(seed=seed) # set seed + +train_data = CIFAR10DataProvider(which_set="train", batch_size=batch_size, rng=rng, random_sampling=True) +val_data = CIFAR10DataProvider(which_set="valid", batch_size=batch_size, rng=rng) +test_data = CIFAR10DataProvider(which_set="test", batch_size=batch_size, rng=rng) +# setup our data providers + +print("Running {}".format(experiment_name)) +print("Starting from epoch {}".format(continue_from_epoch)) + +saved_models_filepath, logs_filepath = build_experiment_folder(experiment_name, logs_path) # generate experiment dir + +# Placeholder setup +data_inputs = tf.placeholder(tf.float32, [batch_size, train_data.inputs.shape[1], train_data.inputs.shape[2], + train_data.inputs.shape[3]], 'data-inputs') +data_targets = tf.placeholder(tf.int32, [batch_size], 'data-targets') + +training_phase = tf.placeholder(tf.bool, name='training-flag') +rotate_data = tf.placeholder(tf.bool, name='rotate-flag') +dropout_rate = tf.placeholder(tf.float32, name='dropout-prob') + +classifier_network = ClassifierNetworkGraph(input_x=data_inputs, target_placeholder=data_targets, + dropout_rate=dropout_rate, batch_size=batch_size, + n_classes=train_data.num_classes, is_training=training_phase, + augment_rotate_flag=rotate_data, + strided_dim_reduction=strided_dim_reduction, + use_batch_normalization=batch_norm) # initialize our computational graph + +if continue_from_epoch == -1: # if this 
is a new experiment and not continuation of a previous one then generate a new + # statistics file + save_statistics(logs_filepath, "result_summary_statistics", ["epoch", "train_c_loss", "train_c_accuracy", + "val_c_loss", "val_c_accuracy", + "test_c_loss", "test_c_accuracy"], create=True) + +start_epoch = continue_from_epoch if continue_from_epoch != -1 else 0 # if new experiment start from 0 otherwise +# continue where left off + +summary_op, losses_ops, c_error_opt_op = classifier_network.init_train() # get graph operations (ops) + +total_train_batches = train_data.num_batches +total_val_batches = val_data.num_batches +total_test_batches = test_data.num_batches + + + +if tensorboard_enable: + print("saved tensorboard file at", logs_filepath) + writer = tf.summary.FileWriter(logs_filepath, graph=tf.get_default_graph()) + +init = tf.global_variables_initializer() # initialization op for the graph + +with tf.Session() as sess: + sess.run(init) # actually running the initialization op + train_saver = tf.train.Saver() # saver object that will save our graph so we can reload it later for continuation of + val_saver = tf.train.Saver() + best_val_accuracy = 0. + best_epoch = 0 + # training or inference + + if continue_from_epoch != -1: + train_saver.restore(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, + continue_from_epoch)) # restore previous graph to continue operations + best_val_accuracy, best_epoch = get_best_validation_model_statistics(logs_filepath, "result_summary_statistics") + print(best_val_accuracy, best_epoch) + + with tqdm.tqdm(total=epochs-start_epoch) as epoch_pbar: + for e in range(start_epoch, epochs): + total_c_loss = 0. + total_accuracy = 0. + with tqdm.tqdm(total=total_train_batches) as pbar_train: + for batch_idx, (x_batch, y_batch) in enumerate(train_data): + iter_id = e * total_train_batches + batch_idx + _, c_loss_value, acc = sess.run( + [c_error_opt_op, losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + # Here we execute the c_error_opt_op which trains the network and also the ops that compute the + # loss and accuracy, we save those in _, c_loss_value and acc respectively. + total_c_loss += c_loss_value # add loss of current iter to sum + total_accuracy += acc # add acc of current iter to sum + + iter_out = "iter_num: {}, train_loss: {}, train_accuracy: {}".format(iter_id, + total_c_loss / (batch_idx + 1), + total_accuracy / ( + batch_idx + 1)) # show + # iter statistics using running averages of previous iter within this epoch + pbar_train.set_description(iter_out) + pbar_train.update(1) + if tensorboard_enable and batch_idx % 25 == 0: # save tensorboard summary every 25 iterations + _summary = sess.run( + summary_op, + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + writer.add_summary(_summary, global_step=iter_id) + + total_c_loss /= total_train_batches # compute mean of los + total_accuracy /= total_train_batches # compute mean of accuracy + + save_path = train_saver.save(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + # save graph and weights + print("Saved current model at", save_path) + + total_val_c_loss = 0. + total_val_accuracy = 0. 
# run validation stage, note how training_phase placeholder is set to False + # and that we do not run the c_error_opt_op which runs gradient descent, but instead only call the loss ops + # to collect losses on the validation set + with tqdm.tqdm(total=total_val_batches) as pbar_val: + for batch_idx, (x_batch, y_batch) in enumerate(val_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_val_c_loss += c_loss_value + total_val_accuracy += acc + iter_out = "val_loss: {}, val_accuracy: {}".format(total_val_c_loss / (batch_idx + 1), + total_val_accuracy / (batch_idx + 1)) + pbar_val.set_description(iter_out) + pbar_val.update(1) + + total_val_c_loss /= total_val_batches + total_val_accuracy /= total_val_batches + + if best_val_accuracy < total_val_accuracy: # check if val acc better than the previous best and if + # so save current as best and save the model as the best validation model to be used on the test set + # after the final epoch + best_val_accuracy = total_val_accuracy + best_epoch = e + save_path = val_saver.save(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + print("Saved best validation score model at", save_path) + + epoch_pbar.update(1) + # save statistics of this epoch, train and val without test set performance + save_statistics(logs_filepath, "result_summary_statistics", + [e, total_c_loss, total_accuracy, total_val_c_loss, total_val_accuracy, + -1, -1]) + + val_saver.restore(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, best_epoch)) + # restore model with best performance on validation set + total_test_c_loss = 0. + total_test_accuracy = 0. + # computer test loss and accuracy and save + with tqdm.tqdm(total=total_test_batches) as pbar_test: + for batch_idx, (x_batch, y_batch) in enumerate(test_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_test_c_loss += c_loss_value + total_test_accuracy += acc + iter_out = "test_loss: {}, test_accuracy: {}".format(total_test_c_loss / (batch_idx + 1), + total_test_accuracy / (batch_idx + 1)) + pbar_test.set_description(iter_out) + pbar_test.update(1) + + total_test_c_loss /= total_test_batches + total_test_accuracy /= total_test_batches + + save_statistics(logs_filepath, "result_summary_statistics", + ["test set performance", -1, -1, -1, -1, + total_test_c_loss, total_test_accuracy]) diff --git a/data/HadSSP_daily_qc.txt b/data/HadSSP_daily_qc.txt deleted file mode 100644 index d7badf5..0000000 --- a/data/HadSSP_daily_qc.txt +++ /dev/null @@ -1,1023 +0,0 @@ -Daily Southern Scotland precipitation (mm). Values may change after QC. -Alexander & Jones (2001, Atmospheric Science Letters). -Format=Year, Month, 1-31 daily precipitation values. 
[... the remaining 1,020 lines of daily precipitation values (1931 onwards), all deleted with this file, are omitted here for brevity ...]
0.00 0.20 3.49 4.09 1.00 0.00 4.09 1.40 2.10 3.79 0.30 1.70 0.10 1.60 0.40 2.00 9.18 -99.99 - 1950 5 13.03 6.81 0.10 0.20 0.00 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.30 3.61 2.91 1.50 1.70 7.31 1.90 0.10 0.00 0.00 2.10 4.11 3.01 1.00 0.50 0.10 - 1950 6 0.40 0.10 0.80 0.10 0.00 0.00 0.20 0.10 1.80 0.00 0.00 0.00 2.60 2.50 2.30 4.51 7.01 0.70 0.00 7.41 0.30 0.10 0.00 3.30 7.01 1.80 4.31 23.03 4.01 2.00 -99.99 - 1950 7 0.30 0.20 0.00 0.00 0.00 0.10 6.70 5.90 2.50 9.80 1.20 2.00 29.30 5.40 9.60 4.00 6.60 9.60 14.70 5.80 1.80 3.00 0.30 1.80 3.40 3.30 2.00 6.10 2.30 10.70 2.20 - 1950 8 1.30 2.10 0.30 4.30 8.51 0.10 0.30 14.41 2.20 8.21 6.20 2.40 1.50 9.71 10.81 4.30 13.61 8.61 2.70 3.40 0.00 6.70 4.70 11.31 9.81 12.51 1.20 3.30 0.40 2.60 0.00 - 1950 9 6.40 1.80 6.00 8.20 1.20 35.19 8.00 0.30 2.40 17.59 19.49 6.80 7.40 3.70 0.90 24.99 26.49 1.70 11.10 7.40 7.20 5.20 13.00 10.40 8.60 4.20 8.70 3.40 2.90 18.29 -99.99 - 1950 10 7.19 2.80 6.69 4.59 9.99 4.59 4.49 12.18 11.38 9.49 3.99 4.59 7.99 4.49 2.30 20.17 1.60 2.50 0.60 0.00 6.99 1.20 0.00 0.00 0.00 6.39 1.00 0.10 1.50 6.69 2.10 - 1950 11 8.90 1.00 0.70 0.10 0.40 0.00 15.60 14.70 2.70 0.40 9.40 8.70 3.50 1.80 3.80 2.10 2.80 8.60 0.90 0.80 4.80 0.50 0.00 0.00 0.00 0.00 2.30 4.20 5.00 12.10 -99.99 - 1950 12 7.67 0.70 1.30 0.20 1.00 5.18 0.70 1.79 14.45 2.29 1.10 0.20 0.30 1.30 0.00 0.20 3.39 2.49 5.28 2.99 1.00 2.39 0.80 0.00 0.00 0.10 0.00 0.10 0.00 0.30 0.10 - 1951 1 2.69 0.20 2.79 2.40 0.30 2.30 5.39 0.50 15.67 13.97 9.28 4.79 8.08 8.38 0.80 26.35 13.67 1.40 2.69 2.99 7.09 0.40 0.00 0.00 5.09 0.50 2.89 0.30 3.19 6.49 3.49 - 1951 2 5.70 15.40 3.30 7.90 10.90 2.20 1.70 5.20 0.10 0.40 1.70 2.50 0.20 0.50 0.00 6.80 5.20 3.00 8.90 2.60 1.50 0.80 1.70 0.10 3.40 0.60 0.20 0.20 -99.99 -99.99 -99.99 - 1951 3 1.50 0.60 0.90 5.50 1.10 3.30 4.30 0.00 1.00 0.20 0.40 0.10 3.70 0.30 0.00 0.20 3.20 4.60 0.00 0.10 25.90 16.10 0.70 0.90 5.30 4.70 0.20 0.10 1.00 9.30 5.60 - 1951 4 2.11 0.50 17.99 4.72 0.40 5.73 3.12 0.10 0.30 0.00 20.00 2.41 3.92 4.62 20.40 2.61 0.20 2.71 0.00 0.00 0.00 0.10 0.20 0.40 0.90 0.40 0.00 0.00 0.90 3.32 -99.99 - 1951 5 18.36 5.19 0.10 0.20 0.20 0.00 1.10 0.40 0.00 0.00 0.00 0.00 0.00 0.00 0.90 0.00 0.00 0.00 3.49 7.09 0.30 0.50 0.40 9.48 0.90 0.50 1.30 0.00 0.00 0.00 0.00 - 1951 6 0.00 0.00 0.00 0.00 0.00 0.20 0.00 0.00 0.00 0.10 6.51 6.21 6.41 2.60 2.30 7.61 5.11 0.30 1.60 14.02 0.40 0.00 1.40 0.00 6.31 0.50 0.10 0.00 0.00 0.40 -99.99 - 1951 7 3.10 6.61 0.00 0.40 6.61 7.81 4.10 4.40 11.31 5.41 6.11 9.41 0.00 0.00 0.60 0.10 2.80 1.10 0.20 0.10 0.00 7.71 0.00 0.40 4.20 8.61 1.80 0.40 0.00 0.00 0.90 - 1951 8 0.40 18.61 0.60 0.90 3.40 7.81 6.60 5.80 0.00 0.60 7.51 0.50 0.20 0.00 0.40 10.91 0.90 12.11 1.50 4.80 11.81 1.20 4.20 1.60 12.01 6.60 1.20 12.51 6.70 0.40 1.40 - 1951 9 2.20 0.50 12.00 4.70 0.00 0.00 0.00 0.00 0.00 0.30 8.70 5.20 15.10 10.30 2.80 2.10 1.00 0.50 0.00 0.00 0.10 1.90 10.20 17.80 8.50 5.60 7.10 0.00 0.00 0.00 -99.99 - 1951 10 0.00 0.10 0.10 1.50 0.00 0.00 0.00 0.00 4.20 0.30 0.00 0.00 0.10 0.90 3.80 2.50 0.40 0.20 7.60 7.70 2.00 1.10 0.90 0.00 0.00 0.00 0.00 0.00 0.10 4.30 0.40 - 1951 11 3.60 2.10 10.21 18.51 11.11 7.50 0.00 3.70 4.30 11.61 3.50 0.80 2.90 8.90 8.00 15.51 9.50 6.40 15.51 5.50 2.90 1.20 16.31 4.40 0.80 2.90 11.61 6.00 1.80 5.90 -99.99 - 1951 12 6.01 10.31 13.21 5.10 5.20 4.20 19.82 12.51 2.60 0.10 0.30 0.60 4.30 5.30 4.70 0.40 10.91 2.10 26.42 0.00 15.81 1.00 18.02 5.30 5.30 4.50 16.61 1.80 14.21 5.91 5.20 - 1952 1 16.11 1.50 0.70 6.70 3.80 2.80 3.00 15.81 9.71 5.40 1.00 0.20 21.21 5.10 9.11 
12.71 1.10 0.30 0.00 0.00 0.00 4.80 0.00 0.00 1.50 0.30 0.40 11.81 0.50 21.11 5.20 - 1952 2 4.79 5.39 0.50 0.30 7.28 3.09 0.60 0.10 2.20 3.49 0.20 0.30 2.99 0.00 0.40 0.70 0.10 0.00 0.10 4.29 0.60 0.00 0.30 0.00 0.00 0.00 0.00 3.89 4.79 -99.99 -99.99 - 1952 3 10.60 6.40 5.40 3.80 5.20 15.00 11.60 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.10 1.80 12.40 0.20 13.30 1.70 0.70 0.30 0.00 0.10 0.50 0.00 0.10 0.20 0.20 - 1952 4 1.10 0.20 0.00 7.39 8.59 2.30 1.20 0.70 5.49 0.20 1.50 0.00 0.00 0.00 0.00 0.00 0.00 0.40 3.59 14.18 8.69 2.90 1.80 0.00 0.40 0.00 1.50 0.10 0.00 3.29 -99.99 - 1952 5 0.40 0.00 3.12 0.91 0.00 8.36 0.10 12.89 1.21 5.64 11.79 0.30 0.40 0.00 0.00 0.00 0.00 0.20 1.01 0.20 0.00 0.00 0.00 0.00 0.00 0.10 0.91 0.10 0.10 0.00 7.15 - 1952 6 8.50 2.30 0.80 11.90 0.30 0.60 0.30 0.00 0.00 1.30 0.00 0.00 3.40 0.80 1.60 12.00 5.90 3.10 9.50 1.90 10.40 0.00 0.60 7.30 0.30 2.10 0.00 10.00 1.90 0.70 -99.99 - 1952 7 10.50 0.30 0.00 0.00 0.00 4.60 9.60 1.00 0.60 6.50 1.90 4.40 1.90 1.30 4.30 10.60 0.50 4.10 8.60 1.70 2.00 0.10 0.00 0.00 0.00 0.10 0.10 0.20 0.20 0.00 9.80 - 1952 8 2.90 10.58 5.89 16.48 0.10 5.99 25.96 3.99 16.38 1.10 4.89 5.69 1.70 0.70 0.10 4.59 0.30 0.00 0.00 0.20 0.00 0.30 0.10 0.90 0.00 18.17 2.70 1.10 0.10 2.80 3.69 - 1952 9 2.90 15.12 0.30 0.10 0.00 0.40 0.40 4.10 0.10 0.00 0.00 0.00 0.10 0.00 0.00 1.70 0.50 0.10 1.10 5.61 0.10 3.30 16.22 16.12 13.32 1.20 0.80 0.10 0.30 0.30 -99.99 - 1952 10 0.80 0.20 0.70 0.50 3.50 1.00 1.60 3.40 0.60 0.00 0.00 10.59 6.59 0.00 0.00 0.10 0.00 1.30 9.79 0.10 0.30 13.29 14.29 6.40 6.30 8.89 16.39 7.79 6.30 3.10 4.80 - 1952 11 3.20 4.61 2.50 15.92 7.51 7.91 0.10 4.51 0.60 0.50 0.00 0.50 4.51 4.71 0.20 0.30 0.00 0.10 0.10 5.31 2.10 2.80 0.10 0.10 0.00 0.40 0.00 0.00 0.00 0.00 -99.99 - 1952 12 0.30 0.10 0.00 0.00 0.00 1.40 1.40 21.48 3.40 9.89 4.10 1.20 0.20 0.90 4.70 16.69 1.20 8.59 8.59 9.29 7.29 10.19 6.00 5.70 5.10 0.80 1.40 1.20 2.20 2.50 2.80 - 1953 1 0.00 0.00 0.00 4.09 1.80 0.10 0.30 2.80 0.80 1.90 1.00 3.70 0.20 4.69 0.60 0.70 0.80 0.00 0.00 0.00 0.00 1.80 3.00 1.70 0.70 15.88 6.69 1.80 1.90 17.68 0.60 - 1953 2 0.00 0.00 0.00 0.30 0.20 0.20 0.40 8.33 1.71 0.70 0.30 0.40 6.52 0.10 0.30 4.52 0.90 1.71 5.12 4.72 1.91 3.71 2.71 10.14 1.51 1.51 0.00 0.00 -99.99 -99.99 -99.99 - 1953 3 0.00 0.00 0.00 0.00 0.00 0.90 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 7.14 0.20 3.72 6.94 5.33 6.84 7.74 - 1953 4 7.90 2.30 1.50 4.00 2.40 0.90 1.40 1.80 0.00 2.00 10.90 2.60 1.70 1.00 6.10 2.90 0.20 0.00 0.00 0.00 0.00 0.00 0.10 0.00 0.10 0.60 7.30 6.30 1.70 1.80 -99.99 - 1953 5 0.20 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 4.01 4.31 8.81 1.80 2.80 14.22 4.21 1.60 5.21 1.30 4.01 12.02 4.81 0.40 1.80 1.30 0.10 0.00 - 1953 6 0.80 0.10 2.10 0.10 0.00 0.00 0.00 0.00 1.70 0.10 0.00 0.10 0.50 10.20 4.30 2.60 1.70 3.30 0.80 4.40 3.00 0.00 0.00 0.20 3.90 19.80 0.00 0.00 0.10 0.00 -99.99 - 1953 7 0.10 2.80 1.30 0.80 14.62 6.41 5.31 4.81 2.30 1.00 15.42 2.40 11.91 3.10 0.10 11.11 7.91 2.80 3.90 12.01 3.90 7.91 3.10 15.62 4.71 11.11 3.00 2.50 1.40 0.10 0.10 - 1953 8 0.30 0.20 2.90 1.80 0.20 0.00 0.00 0.00 0.40 0.00 0.20 3.40 0.00 9.31 1.30 8.01 11.81 1.70 1.10 8.21 5.51 0.40 3.50 5.61 2.20 0.50 7.81 1.40 5.81 12.61 13.41 - 1953 9 13.00 11.00 0.40 0.30 0.40 0.00 0.00 0.50 0.40 0.40 0.30 0.00 0.00 1.40 7.00 0.80 15.10 1.20 14.00 10.90 13.70 7.50 0.10 0.30 2.70 5.00 4.70 4.80 13.40 20.80 -99.99 - 1953 10 7.89 2.60 0.00 0.00 0.00 0.00 0.00 0.30 0.00 0.50 0.40 2.30 1.20 0.10 0.80 5.49 0.20 0.00 0.00 
5.59 0.30 2.20 6.89 10.49 0.30 14.49 4.50 0.10 12.09 3.60 19.18 - 1953 11 11.40 6.20 10.90 1.70 3.60 19.90 8.80 7.60 2.50 10.10 17.80 17.80 4.10 32.60 4.10 0.20 0.00 2.60 4.10 0.00 0.00 0.00 6.50 5.60 7.80 11.30 8.10 3.80 3.40 2.00 -99.99 - 1953 12 6.00 19.70 29.20 0.00 0.20 0.00 0.00 1.30 8.50 1.10 1.20 4.90 5.20 2.50 0.10 0.00 0.30 6.70 0.80 9.50 5.70 3.00 11.30 9.70 1.50 8.10 2.10 0.20 1.40 0.50 0.40 - 1954 1 0.40 0.30 0.00 0.00 2.30 0.30 0.80 2.20 0.20 0.00 0.90 11.09 6.39 8.49 17.08 7.99 2.40 25.37 6.69 20.67 2.10 9.59 1.90 5.29 16.58 0.40 0.00 0.50 0.70 0.50 0.20 - 1954 2 0.80 0.10 0.10 1.10 0.50 13.88 0.70 0.10 2.80 7.29 1.30 13.28 8.98 0.90 0.30 7.29 0.40 1.70 3.69 6.09 2.90 17.67 5.49 14.08 5.39 0.60 0.00 0.00 -99.99 -99.99 -99.99 - 1954 3 0.00 8.92 5.51 0.20 3.81 18.54 9.32 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.90 1.90 4.11 9.42 7.92 2.00 1.50 1.60 0.10 4.71 0.70 17.43 3.61 0.70 - 1954 4 4.39 16.67 9.18 3.39 0.50 0.00 2.79 0.10 0.00 0.40 1.40 1.40 3.29 0.50 0.00 0.10 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.50 6.59 -99.99 - 1954 5 1.00 9.13 3.41 2.51 20.86 1.50 2.01 0.10 0.00 0.10 1.20 0.40 10.03 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.01 3.71 15.14 2.11 3.11 9.33 16.35 0.60 0.00 0.00 - 1954 6 0.00 0.00 0.00 0.00 1.20 4.10 0.70 5.60 1.40 0.90 0.10 0.00 0.10 13.10 12.10 4.00 3.90 4.30 8.30 10.10 3.50 9.30 4.50 6.10 9.20 1.40 0.50 0.10 3.40 1.50 -99.99 - 1954 7 0.70 1.90 5.99 3.10 0.30 1.00 1.00 0.50 4.40 0.50 0.00 0.00 4.00 2.10 0.10 6.79 3.10 0.40 10.49 1.00 1.70 4.30 21.48 0.50 0.00 17.58 14.79 3.10 0.60 0.60 0.60 - 1954 8 7.41 0.60 0.20 7.81 12.91 5.20 0.50 2.40 5.30 1.40 0.00 8.11 3.50 3.40 3.80 0.00 17.61 0.90 0.00 0.10 6.20 2.00 1.20 0.00 0.00 1.20 1.10 13.81 3.20 14.21 5.50 - 1954 9 3.40 2.10 5.30 0.00 4.20 3.00 5.70 8.40 14.51 6.60 3.80 1.90 4.80 13.71 19.71 11.61 4.80 0.50 17.01 7.50 1.90 0.00 17.61 10.41 4.60 1.30 0.70 3.70 12.81 7.70 -99.99 - 1954 10 3.50 0.40 10.81 14.31 1.60 0.00 11.11 0.30 4.70 5.00 1.60 3.80 9.11 2.40 21.62 17.31 33.23 26.62 2.70 1.40 4.00 9.21 10.01 1.70 1.40 15.51 8.91 19.12 7.41 0.40 0.10 - 1954 11 0.00 4.00 4.40 2.00 0.30 0.00 10.40 12.20 12.30 21.90 9.10 11.10 4.70 0.10 2.20 0.90 2.40 3.00 1.40 5.30 7.20 18.70 11.40 19.30 4.40 13.10 24.00 3.70 9.60 8.60 -99.99 - 1954 12 18.41 8.60 8.50 6.40 1.00 0.50 1.70 6.80 9.00 2.00 7.60 2.70 2.50 15.41 1.80 7.20 6.00 8.40 8.80 3.30 6.30 7.70 0.20 5.40 11.81 8.20 4.90 7.70 1.50 0.00 0.00 - 1955 1 0.00 0.00 0.30 0.50 0.10 0.00 0.00 0.10 27.21 3.29 2.29 1.00 0.20 1.00 3.79 1.30 2.19 2.59 0.00 2.99 7.08 0.00 0.70 3.39 6.18 0.60 10.37 8.67 5.48 2.99 1.00 - 1955 2 9.20 5.00 0.10 0.00 0.20 7.30 10.60 1.30 0.00 0.40 1.80 0.90 1.80 1.80 0.30 0.90 3.40 0.90 0.50 0.10 0.30 0.30 0.90 0.50 0.00 0.00 2.30 21.20 -99.99 -99.99 -99.99 - 1955 3 9.70 0.00 1.10 0.10 0.20 0.20 0.60 0.30 0.00 0.00 0.00 0.00 0.00 0.20 0.10 0.10 0.30 0.60 0.00 17.60 3.90 2.50 9.40 7.60 0.20 0.00 0.00 0.00 0.00 0.00 0.00 - 1955 4 0.00 6.69 2.50 0.20 7.29 2.00 10.69 4.99 9.89 0.40 1.10 0.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.30 3.20 0.20 0.00 6.49 8.79 15.58 0.10 1.70 3.00 -99.99 - 1955 5 6.70 0.10 17.60 19.00 3.40 1.10 7.30 8.00 4.30 1.10 3.30 7.40 1.30 1.70 2.40 0.20 1.30 0.70 0.00 0.90 0.60 7.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 - 1955 6 0.00 3.60 1.30 0.50 0.80 0.00 2.20 0.00 0.00 0.40 3.10 0.10 16.80 1.50 0.40 0.10 0.00 0.00 0.80 0.30 1.60 0.70 7.30 0.70 1.50 2.40 8.40 16.90 0.30 6.20 -99.99 - 1955 7 10.63 10.43 14.44 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.40 0.10 0.00 0.00 0.10 
0.10 0.00 0.00 0.00 0.20 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 - 1955 8 0.00 0.10 0.00 0.00 0.30 0.10 0.00 6.92 0.80 0.00 0.00 0.10 0.00 0.00 0.90 7.32 3.11 8.32 2.01 0.40 2.61 0.10 0.00 0.00 0.00 0.00 0.50 0.40 2.01 0.60 1.40 - 1955 9 15.21 2.50 4.20 12.61 0.70 0.00 0.00 8.41 5.30 13.01 1.50 9.71 2.00 1.90 0.20 2.70 9.21 0.30 0.00 6.30 3.60 2.90 0.30 10.11 8.11 2.20 0.70 1.00 3.20 0.60 -99.99 - 1955 10 2.90 6.39 1.40 2.90 12.99 0.20 7.49 2.60 0.20 0.00 1.20 0.00 6.49 5.10 0.90 2.80 0.40 20.58 7.99 0.90 0.10 0.20 0.30 0.40 15.19 0.20 0.90 0.30 1.00 2.50 3.20 - 1955 11 0.70 5.00 7.00 0.10 2.20 6.30 17.20 3.30 1.20 2.40 7.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20 0.90 0.10 0.40 0.00 1.50 0.70 4.50 1.80 0.70 1.20 -99.99 - 1955 12 1.40 11.61 2.40 0.80 3.80 14.51 0.30 6.40 25.61 1.60 0.00 0.20 7.10 18.71 4.30 1.10 0.10 0.00 0.00 0.20 3.60 13.71 15.01 1.30 16.11 7.80 27.91 9.20 1.80 2.10 8.00 - 1956 1 1.00 0.40 0.20 0.30 0.70 1.20 2.00 0.00 0.60 6.39 1.10 1.60 0.00 3.89 5.19 2.40 4.39 1.40 9.68 6.69 7.29 1.10 1.10 0.30 1.30 12.28 7.58 13.87 2.20 2.89 0.20 - 1956 2 0.40 0.00 6.21 7.72 0.40 0.40 0.10 0.60 0.10 0.40 0.40 0.50 0.50 0.00 1.60 0.10 0.10 0.20 1.80 0.80 1.20 0.10 0.00 0.00 0.00 0.00 6.82 4.91 6.52 -99.99 -99.99 - 1956 3 25.16 4.49 7.79 2.20 7.69 3.89 0.20 2.70 0.00 0.00 0.00 1.40 0.00 0.00 0.20 0.10 0.50 0.00 0.00 3.00 0.00 5.19 0.00 5.09 0.20 0.40 0.30 0.00 0.00 0.00 0.00 - 1956 4 0.00 1.00 0.50 3.09 0.00 0.40 2.79 7.38 5.49 1.60 0.00 0.00 0.00 2.29 1.00 0.50 0.00 0.00 0.00 0.00 1.00 0.40 0.60 1.80 2.59 1.10 0.00 0.30 0.20 4.29 -99.99 - 1956 5 1.50 2.40 1.40 0.50 0.50 7.29 8.39 13.68 15.38 2.80 2.50 0.00 2.90 0.50 2.40 1.10 0.80 0.10 0.00 4.99 0.10 0.00 7.89 0.10 0.30 0.20 0.00 0.00 1.10 0.10 0.90 - 1956 6 1.00 1.20 12.41 13.12 8.91 9.91 1.20 0.00 0.00 0.00 0.80 5.51 1.20 3.50 0.10 2.90 5.81 3.80 0.50 0.20 0.20 0.10 0.30 0.00 0.60 0.00 3.90 1.10 2.80 4.00 -99.99 - 1956 7 7.80 3.80 4.40 19.90 0.80 5.20 5.70 1.90 0.00 0.60 0.00 0.00 8.30 3.60 0.00 0.80 5.00 3.40 0.00 0.00 0.10 0.30 18.60 1.10 2.50 0.20 0.80 12.90 23.10 2.00 1.50 - 1956 8 11.79 5.30 3.20 1.00 0.90 2.80 3.50 0.00 0.00 17.69 3.70 27.78 7.40 0.50 5.00 18.09 9.79 4.60 1.70 1.60 0.10 0.70 7.80 9.69 2.00 0.30 11.39 7.30 0.70 0.20 0.30 - 1956 9 0.00 5.80 14.30 0.60 11.90 3.50 1.00 0.00 0.00 7.60 1.00 9.90 0.70 0.00 2.40 0.00 0.90 0.40 0.30 1.20 1.70 15.20 7.00 0.00 0.60 2.80 22.40 6.20 5.60 3.00 -99.99 - 1956 10 1.90 5.69 9.79 3.50 0.10 0.80 1.30 0.40 0.00 0.00 0.90 0.40 0.00 0.00 0.80 17.78 1.30 0.60 17.58 0.00 0.70 13.49 8.49 10.69 2.70 0.20 6.49 1.10 0.10 0.20 0.00 - 1956 11 0.00 0.00 0.10 0.20 0.10 0.00 1.60 5.71 4.71 2.20 0.40 0.60 3.51 0.80 0.10 1.10 0.00 0.30 0.10 0.00 1.80 3.01 0.00 8.82 3.91 8.31 7.61 0.60 0.20 2.70 -99.99 - 1956 12 3.50 2.60 3.20 24.30 4.10 1.60 0.80 3.60 5.50 24.80 13.20 10.30 16.80 9.80 10.30 1.70 1.40 0.10 3.00 0.70 2.10 4.60 2.60 1.50 2.00 1.70 8.40 5.60 2.80 9.00 1.10 - 1957 1 1.20 8.70 15.00 13.50 11.20 1.20 0.70 1.30 3.30 0.10 4.60 0.10 0.10 0.30 0.00 0.10 0.00 0.00 4.90 13.80 15.00 17.60 10.10 0.70 28.60 6.40 3.80 11.10 2.80 6.90 6.70 - 1957 2 3.60 0.30 6.81 8.11 7.51 3.50 11.01 4.60 0.20 5.61 2.50 2.40 2.80 0.00 4.60 0.50 0.00 0.00 0.00 1.30 0.00 0.00 28.83 3.10 0.50 0.00 0.00 0.00 -99.99 -99.99 -99.99 - 1957 3 6.99 1.10 1.30 2.60 1.70 3.10 2.30 6.69 3.40 2.90 0.70 0.00 3.80 3.50 18.59 6.20 3.10 8.49 17.29 8.19 1.00 1.10 4.70 1.60 6.39 1.10 0.10 0.10 9.29 0.20 0.10 - 1957 4 0.80 2.50 10.40 2.40 0.00 0.00 0.00 0.00 0.00 0.10 0.20 0.70 0.70 0.90 2.70 10.40 6.20 2.50 1.80 11.50 7.90 
0.00 0.20 0.00 0.00 0.20 0.20 2.10 0.00 0.30 -99.99 - 1957 5 0.10 0.00 0.00 0.30 0.00 0.50 4.50 9.60 0.20 1.90 3.30 5.40 2.20 4.00 8.50 5.40 2.50 5.00 5.10 1.00 0.10 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 - 1957 6 1.71 1.21 3.32 3.52 1.51 0.60 0.60 0.00 1.51 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.30 0.10 0.20 0.10 14.48 16.18 11.06 0.80 0.00 -99.99 - 1957 7 0.00 0.30 0.10 0.00 0.00 19.49 6.60 0.30 0.00 1.80 13.79 6.99 3.10 0.70 0.20 3.80 10.79 5.90 7.99 6.40 0.00 0.60 1.60 10.39 19.69 11.39 0.30 0.10 0.00 0.80 0.50 - 1957 8 0.00 0.00 0.00 2.30 1.20 0.00 0.50 5.70 3.50 10.71 3.10 0.90 11.11 10.01 1.60 1.10 0.80 6.50 2.80 0.90 1.70 1.10 16.71 37.03 4.40 2.80 1.00 0.10 0.20 2.10 3.20 - 1957 9 0.00 0.20 4.70 2.90 3.10 12.20 6.00 5.90 6.80 2.70 9.50 5.10 0.30 0.70 1.20 7.60 6.80 3.20 0.00 1.50 5.90 8.90 0.20 0.00 0.00 0.00 1.20 0.30 0.20 0.00 -99.99 - 1957 10 0.30 1.20 0.00 6.51 0.00 0.10 0.30 0.40 0.10 0.50 0.20 3.60 1.70 0.40 14.32 0.30 9.81 15.62 6.61 12.52 6.41 8.31 1.20 11.32 18.93 0.20 9.91 2.60 0.50 7.91 8.21 - 1957 11 8.00 7.20 5.10 6.30 2.70 0.50 0.40 1.00 0.00 1.10 1.90 0.30 0.00 0.10 0.00 0.00 3.20 0.80 2.00 2.00 7.60 0.10 0.20 0.90 0.10 2.00 3.00 1.60 0.00 0.00 -99.99 - 1957 12 0.00 0.00 1.50 0.40 6.60 16.11 21.11 2.50 1.30 23.02 3.90 0.80 0.00 0.20 1.70 6.10 3.50 2.90 12.01 9.51 17.81 7.40 0.10 1.00 1.10 1.50 1.50 2.10 1.50 1.70 0.00 - 1958 1 0.00 0.90 1.70 12.31 7.10 11.81 3.00 29.02 10.91 11.31 1.40 0.40 2.00 1.70 1.20 1.60 5.00 3.50 3.10 3.70 0.90 0.00 0.10 8.11 21.71 2.80 3.70 5.30 0.20 0.10 1.20 - 1958 2 3.20 0.10 3.90 14.59 0.10 0.00 3.40 6.79 3.20 8.49 6.79 0.30 7.19 4.30 2.30 0.40 0.10 0.40 1.20 5.79 2.30 14.09 5.29 2.50 0.10 1.80 1.00 3.90 -99.99 -99.99 -99.99 - 1958 3 0.10 0.00 4.41 4.11 1.60 0.40 0.30 0.40 0.10 0.10 0.20 3.31 2.21 0.00 0.00 0.10 0.00 0.00 0.20 0.10 0.00 0.00 0.00 0.80 0.30 3.11 2.11 2.41 8.52 3.21 0.10 - 1958 4 0.00 0.40 2.80 2.70 0.00 0.00 0.00 0.00 0.50 0.00 0.00 0.00 0.00 0.00 0.10 5.80 4.10 4.40 2.30 0.50 0.00 4.00 5.70 2.70 8.20 3.00 1.90 0.20 0.00 0.00 -99.99 - 1958 5 0.00 0.00 0.00 0.00 7.42 0.20 9.83 5.12 0.00 0.00 0.90 0.50 0.30 1.20 1.30 0.20 3.31 14.25 10.23 7.12 1.10 3.31 8.33 6.42 0.60 1.81 0.90 0.20 2.21 4.92 1.20 - 1958 6 2.00 7.71 0.40 0.00 0.00 0.00 2.60 0.10 5.61 5.21 12.11 0.00 0.00 0.00 3.10 1.00 0.10 6.01 5.61 0.20 0.50 0.40 1.80 4.10 11.61 2.40 8.21 0.00 1.20 9.01 -99.99 - 1958 7 1.70 0.00 0.10 0.80 0.00 0.10 0.50 1.00 1.20 0.20 7.69 13.98 8.49 1.70 6.99 4.19 1.30 1.60 10.39 0.10 5.69 0.20 0.10 0.00 6.69 9.29 12.68 30.86 8.69 6.39 3.20 - 1958 8 9.51 2.60 8.31 8.01 0.70 1.30 0.30 0.10 7.61 18.12 2.30 1.00 10.81 3.81 12.02 0.00 1.50 10.91 2.30 7.21 7.81 11.11 0.30 0.00 0.90 9.01 6.81 0.00 0.20 3.10 0.70 - 1958 9 0.40 0.00 0.00 0.60 0.90 21.18 5.79 1.10 0.00 0.00 0.00 0.00 0.50 0.60 0.10 1.70 0.00 10.29 1.70 5.10 2.80 0.60 19.58 13.99 0.00 0.00 0.80 15.09 4.90 3.20 -99.99 - 1958 10 1.90 3.40 15.59 14.69 2.10 15.19 3.70 3.20 6.89 12.59 1.90 13.79 0.50 4.10 5.70 1.30 0.40 1.50 2.30 0.90 0.50 0.50 0.30 0.00 0.00 0.00 0.00 0.00 3.70 3.70 0.90 - 1958 11 19.40 0.90 0.40 8.85 1.71 0.60 4.62 1.21 1.31 0.30 6.03 6.23 0.80 1.21 1.71 0.40 0.00 0.00 1.01 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.00 -99.99 - 1958 12 0.80 0.10 0.30 0.10 0.00 0.20 7.51 5.01 0.40 8.91 8.31 19.92 0.30 5.31 1.20 0.40 4.11 9.81 9.91 4.91 6.61 3.30 0.10 1.90 9.61 7.51 7.51 9.51 11.21 5.61 10.11 - 1959 1 16.73 0.10 0.40 0.30 0.00 0.00 0.60 1.00 0.20 0.60 0.50 0.00 0.00 0.00 0.00 2.20 3.21 16.13 9.02 0.80 2.10 1.10 0.10 0.10 0.00 
0.00 0.00 0.30 1.20 0.30 0.00 - 1959 2 0.00 0.00 0.00 0.10 0.10 0.00 0.10 0.00 1.60 0.10 0.10 0.00 4.39 6.99 1.90 0.10 0.00 0.10 1.40 2.50 3.39 1.40 5.89 5.79 2.50 4.79 8.58 1.60 -99.99 -99.99 -99.99 - 1959 3 0.30 5.80 1.50 2.50 7.10 1.30 0.00 0.00 0.10 3.20 8.20 0.60 6.70 14.70 0.10 0.00 0.00 0.00 0.00 0.00 0.00 5.10 0.30 4.50 2.50 4.40 3.00 0.90 2.90 1.40 7.40 - 1959 4 4.70 2.60 0.40 1.20 4.90 2.10 8.01 1.30 1.10 3.10 7.91 3.60 6.31 3.00 1.80 0.40 2.40 2.10 1.40 0.20 0.00 0.00 0.00 5.00 20.02 2.70 7.31 1.60 0.10 6.01 -99.99 - 1959 5 3.28 3.28 0.20 0.10 0.40 0.00 6.55 0.00 1.39 4.17 8.04 5.46 0.10 0.00 0.10 0.20 1.09 2.18 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.00 5.16 - 1959 6 1.90 0.00 2.79 4.09 2.89 6.39 15.96 10.87 1.80 0.30 0.50 0.10 0.00 0.00 0.00 0.50 1.40 0.20 0.00 0.00 1.80 1.50 0.20 3.59 8.18 6.78 3.39 2.29 3.69 4.39 -99.99 - 1959 7 8.11 18.83 4.81 0.00 1.30 3.00 0.50 0.00 0.00 0.00 26.34 6.71 0.10 0.00 1.70 13.82 6.81 4.01 3.30 0.20 0.00 0.00 0.00 0.00 0.60 22.33 18.73 8.01 0.20 0.00 0.20 - 1959 8 0.70 0.70 0.10 0.20 0.40 1.11 0.30 0.00 1.41 0.00 0.00 0.00 7.04 2.01 2.92 0.10 2.01 0.00 0.00 0.50 0.91 2.41 6.94 3.22 0.50 0.00 0.70 0.00 0.00 0.00 0.00 - 1959 9 0.00 0.00 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.10 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16.71 5.87 1.09 6.37 2.79 4.48 0.10 0.40 0.00 0.00 0.10 -99.99 - 1959 10 0.00 0.10 0.00 0.00 0.00 0.80 0.00 0.00 0.00 0.00 1.90 5.80 0.10 0.00 0.00 7.50 33.52 14.91 5.20 18.41 6.30 1.50 6.00 9.21 11.51 33.92 1.80 0.70 2.60 0.10 0.90 - 1959 11 2.00 4.20 0.70 0.70 2.20 4.40 0.10 18.81 9.81 6.30 0.60 2.80 10.01 6.90 1.20 4.50 2.90 4.60 14.21 6.90 2.60 21.71 9.91 4.90 6.10 2.80 3.50 12.81 4.30 2.70 -99.99 - 1959 12 6.11 6.11 0.90 0.10 7.91 7.01 4.41 9.12 3.31 0.30 2.60 2.81 14.33 0.50 1.10 13.93 8.72 4.21 7.21 10.12 6.91 13.83 8.22 6.71 12.42 11.32 4.01 0.80 18.63 4.21 16.73 - 1960 1 0.10 0.50 3.59 4.29 0.40 0.00 0.00 0.20 0.00 0.20 0.10 1.70 0.60 0.40 0.20 0.10 5.19 16.18 0.80 10.18 13.48 19.57 1.90 0.60 0.30 0.10 0.60 3.79 2.80 32.75 8.79 - 1960 2 9.80 15.70 16.90 4.50 0.40 0.00 0.10 0.00 0.20 0.80 0.20 0.90 2.50 0.90 1.10 0.60 6.40 4.80 4.40 3.90 4.80 2.90 0.80 9.50 3.20 14.30 15.30 0.90 17.10 -99.99 -99.99 - 1960 3 3.10 10.08 8.49 0.70 0.00 0.00 0.00 0.00 1.20 2.30 0.70 0.90 5.29 3.59 2.40 0.10 0.00 4.39 10.38 6.79 0.40 0.00 0.00 0.00 0.00 0.00 0.00 0.30 0.00 1.60 0.10 - 1960 4 6.41 10.71 8.41 10.21 10.01 1.60 3.90 15.01 8.01 2.80 7.31 21.82 9.21 4.40 0.30 0.00 0.00 0.20 0.90 0.10 0.10 0.00 0.00 0.20 0.00 0.20 0.00 0.10 0.00 0.90 -99.99 - 1960 5 0.00 0.10 1.50 4.70 0.20 0.30 0.30 3.00 0.20 0.00 0.00 11.20 15.40 0.70 0.80 0.00 0.00 0.00 0.00 0.00 0.10 1.00 15.60 0.00 2.90 3.70 0.50 0.00 0.00 0.00 1.30 - 1960 6 2.89 0.50 0.00 0.10 7.88 6.88 9.87 3.49 0.10 3.59 9.77 8.17 0.30 1.79 6.88 0.80 0.10 0.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 0.00 0.00 -99.99 - 1960 7 0.10 0.30 2.10 5.40 14.01 4.90 1.50 0.60 0.20 9.31 6.31 3.20 4.30 4.70 1.90 12.71 3.90 5.30 0.70 2.60 3.90 1.20 3.50 1.10 6.81 0.70 4.10 2.00 3.90 0.20 0.90 - 1960 8 0.80 7.19 1.80 0.40 0.00 0.00 0.40 6.69 7.39 0.40 0.20 1.20 3.10 0.40 2.10 1.00 9.29 2.10 0.90 5.39 6.59 10.99 0.80 13.79 13.69 5.00 3.70 1.10 0.50 0.10 0.10 - 1960 9 0.10 8.31 4.61 0.50 3.50 8.31 0.00 7.71 0.00 2.80 3.00 1.00 18.72 10.41 0.50 6.61 1.00 2.90 1.40 0.30 1.60 0.50 0.10 0.70 0.00 0.00 0.00 0.40 0.20 0.20 -99.99 - 1960 10 3.09 23.55 7.09 0.60 7.48 0.30 1.30 0.50 0.00 0.00 0.30 0.40 0.10 0.10 0.00 0.10 6.69 6.09 12.87 1.50 1.40 2.59 1.40 0.30 1.00 1.30 1.00 1.30 0.30 7.19 
7.78 - 1960 11 11.40 12.40 14.50 3.50 0.70 0.00 0.10 3.10 15.30 15.30 8.20 6.60 3.20 10.50 11.10 3.80 0.00 1.00 2.40 8.60 2.30 5.60 3.30 3.50 0.10 0.00 6.00 1.20 12.80 23.70 -99.99 - 1960 12 4.30 19.31 19.41 17.61 2.80 0.90 3.80 5.70 0.10 0.80 3.30 0.10 0.50 0.10 4.40 3.30 3.90 2.00 0.90 0.10 1.00 7.50 1.10 4.70 35.92 10.71 8.10 2.40 6.20 4.20 4.10 - 1961 1 9.08 1.70 2.00 1.20 5.49 0.10 8.09 7.39 1.20 0.00 21.86 6.59 0.70 0.20 0.00 0.20 0.70 6.69 0.20 0.00 0.10 0.10 0.00 0.00 0.00 11.98 7.39 12.18 9.18 5.59 1.20 - 1961 2 1.80 0.10 8.40 12.30 18.90 8.60 9.80 11.30 8.00 2.40 9.10 8.80 5.20 3.50 0.50 0.10 4.40 0.70 0.00 0.20 0.00 0.00 0.00 3.10 4.90 10.10 3.80 1.60 -99.99 -99.99 -99.99 - 1961 3 10.57 0.20 0.10 0.60 0.00 1.00 0.10 0.10 0.40 0.60 12.67 4.69 2.49 0.30 0.00 0.00 3.59 0.60 1.10 0.00 0.30 0.50 3.99 0.40 4.99 0.50 0.20 13.07 15.46 0.90 1.60 - 1961 4 2.60 0.20 0.10 4.39 17.87 0.30 0.10 4.59 5.09 1.30 9.68 11.98 2.20 0.90 0.00 0.50 0.10 0.80 12.98 14.27 2.40 4.99 2.10 0.40 6.29 2.89 1.50 0.20 2.10 0.80 -99.99 - 1961 5 7.39 3.69 8.58 1.70 3.99 6.49 13.28 3.09 0.10 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.20 1.30 0.00 0.20 2.20 2.89 0.10 - 1961 6 0.00 1.59 0.40 2.29 10.25 0.90 2.29 1.99 0.60 2.19 0.10 0.10 0.00 1.39 2.49 3.98 7.17 2.59 1.00 0.60 6.97 0.20 0.00 4.48 8.96 0.10 1.39 0.20 0.00 1.29 -99.99 - 1961 7 0.10 0.50 11.57 0.10 0.00 2.69 4.19 0.70 0.40 0.60 13.06 33.41 2.19 0.00 3.59 1.80 0.20 0.30 0.00 0.00 0.00 0.10 0.00 3.09 19.45 3.49 2.59 0.50 0.10 3.99 0.10 - 1961 8 0.70 0.70 34.38 1.10 6.60 1.50 7.10 44.67 5.30 2.10 0.20 6.00 0.80 0.80 2.90 0.60 5.20 6.00 1.80 9.49 7.70 0.40 0.90 8.99 10.79 6.00 3.70 0.20 0.10 0.00 0.00 - 1961 9 9.59 8.29 1.30 1.10 4.20 2.40 0.10 0.10 4.30 3.00 2.00 23.79 2.40 9.69 9.09 1.90 0.20 0.10 3.60 2.10 2.20 1.20 4.30 4.30 16.59 11.39 10.79 13.29 10.19 1.60 -99.99 - 1961 10 0.00 4.00 6.00 15.90 11.00 0.20 5.40 5.10 10.70 4.20 5.60 0.10 0.00 0.30 3.00 24.50 1.80 0.20 0.00 4.70 9.00 20.00 22.40 7.30 5.50 8.60 1.20 0.50 1.60 4.50 3.70 - 1961 11 10.39 4.80 0.30 2.00 11.19 3.70 18.39 1.20 1.50 0.40 0.20 0.60 0.20 0.00 0.40 0.20 0.10 0.10 1.80 0.00 1.50 10.89 11.19 9.39 9.69 1.00 0.40 11.79 12.99 2.60 -99.99 - 1961 12 3.10 1.10 0.90 16.40 3.50 1.80 0.70 10.10 8.30 22.80 7.10 6.80 9.20 0.10 1.10 0.10 0.00 0.00 0.00 0.10 0.00 0.20 0.00 0.00 0.10 0.10 2.60 5.70 0.60 0.50 1.50 - 1962 1 0.10 1.30 0.10 4.00 9.60 13.41 0.80 16.11 2.70 18.21 16.91 9.10 5.60 4.10 32.01 7.40 11.70 8.50 5.10 3.70 2.10 11.10 10.70 13.81 7.40 0.30 0.00 0.30 3.40 25.51 12.70 - 1962 2 1.00 2.81 4.92 9.33 11.74 8.43 1.81 3.41 2.51 3.51 39.24 3.11 0.10 3.81 8.23 1.51 1.20 0.90 0.20 0.00 0.00 0.00 0.00 0.40 0.80 2.31 0.70 0.10 -99.99 -99.99 -99.99 - 1962 3 0.60 0.50 0.00 0.10 0.00 0.00 0.10 2.20 1.90 1.00 0.10 0.00 0.00 0.00 0.00 0.20 0.00 0.00 0.10 0.10 0.00 0.00 0.10 5.60 15.00 0.20 0.10 9.60 8.40 1.90 5.70 - 1962 4 5.21 31.27 5.21 4.81 2.20 22.75 4.31 0.80 3.91 4.51 0.30 0.00 0.00 0.00 0.00 0.20 0.30 1.70 0.30 3.61 1.60 0.00 0.00 0.10 0.30 0.00 0.00 0.00 0.00 0.00 -99.99 - 1962 5 0.00 0.00 0.00 0.10 1.50 1.20 7.41 1.20 0.20 4.91 1.90 0.00 0.10 0.10 17.73 7.61 3.30 1.70 0.70 6.81 1.30 6.21 0.60 0.00 0.00 0.00 0.00 2.20 0.30 0.10 0.00 - 1962 6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.70 0.20 1.90 1.10 3.79 1.40 2.00 0.00 7.19 7.19 10.39 2.80 10.59 2.50 10.88 3.00 4.59 0.40 0.10 0.00 0.70 0.20 -99.99 - 1962 7 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12.41 1.90 0.00 0.00 0.00 0.00 0.00 0.00 16.42 3.80 3.00 23.32 1.40 0.00 17.62 0.10 0.00 0.00 0.10 10.51 
2.20 2.50 - 1962 8 0.30 4.10 15.19 5.90 2.70 3.60 2.70 2.50 11.69 32.18 3.10 0.50 0.00 2.50 13.49 4.30 0.10 2.60 8.80 6.20 10.59 7.50 13.29 4.30 9.89 19.89 4.40 0.30 0.70 1.40 0.00 - 1962 9 6.10 9.80 9.10 8.00 2.30 0.70 3.50 17.61 34.62 2.30 24.81 0.30 0.80 9.80 1.40 2.00 0.00 0.00 0.00 0.00 0.00 4.20 0.50 0.90 2.40 9.50 9.10 3.90 21.11 12.41 -99.99 - 1962 10 6.61 0.10 1.00 6.61 0.00 0.00 0.00 0.00 0.00 0.20 0.30 0.00 0.60 0.10 0.00 0.40 0.30 1.30 0.00 0.00 0.00 0.00 1.50 4.91 1.60 1.80 14.12 0.50 14.92 11.82 7.31 - 1962 11 9.61 4.61 2.70 2.40 2.30 1.60 3.30 0.40 0.30 0.10 0.20 0.00 3.20 3.40 3.80 16.92 2.60 0.10 0.40 0.20 0.10 4.41 19.42 0.70 0.00 0.00 0.60 0.80 1.80 0.30 -99.99 - 1962 12 0.80 0.00 0.30 0.10 0.00 0.50 35.67 15.09 1.90 7.39 3.10 0.70 1.70 12.89 7.59 0.50 5.60 4.10 17.39 4.50 0.70 0.40 3.10 2.60 7.99 0.20 0.00 0.00 1.10 3.60 0.50 - 1963 1 1.41 0.10 1.82 3.03 0.40 0.10 0.10 0.30 0.00 0.30 0.10 0.00 0.50 0.10 2.62 0.10 0.10 0.50 0.10 0.00 0.00 0.00 0.00 0.20 0.00 0.00 0.00 0.00 8.37 1.11 1.11 - 1963 2 0.50 0.10 0.00 0.20 2.10 5.40 12.60 0.30 0.10 0.10 0.00 0.00 3.20 10.00 2.80 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -99.99 -99.99 -99.99 - 1963 3 0.00 0.00 0.00 6.71 6.51 12.71 4.30 16.02 7.01 3.30 0.00 1.10 18.72 11.91 4.30 7.01 14.32 4.20 0.00 0.20 0.30 0.00 0.80 29.73 2.30 5.91 3.30 9.31 4.10 0.20 1.40 - 1963 4 4.42 0.50 0.50 1.00 2.11 0.00 0.00 0.00 0.20 2.31 2.01 4.32 4.22 7.83 0.70 5.82 9.94 2.01 0.00 6.32 12.34 1.91 0.90 0.60 0.00 1.20 2.71 0.40 0.80 8.03 -99.99 - 1963 5 0.20 3.70 0.40 8.70 6.70 7.90 11.00 3.00 5.20 7.90 5.40 9.50 5.00 1.40 0.50 10.80 3.00 4.40 4.90 10.10 4.50 0.00 0.80 4.70 0.60 1.10 0.00 0.00 0.00 0.60 0.00 - 1963 6 0.00 0.00 0.00 0.10 0.00 2.31 6.51 0.60 0.00 0.00 0.60 5.41 0.50 0.00 4.11 0.10 9.92 7.52 2.91 11.32 1.80 1.00 9.62 6.21 2.81 8.52 6.61 0.80 0.40 1.00 -99.99 - 1963 7 0.40 3.90 8.79 3.00 5.79 1.80 0.00 1.60 0.40 3.00 2.60 0.80 2.90 8.09 6.09 1.50 1.10 3.30 3.80 0.00 3.20 0.70 19.28 0.80 1.10 0.00 0.00 0.00 0.00 0.00 0.00 - 1963 8 0.30 4.70 0.10 5.00 10.10 9.00 7.40 1.10 8.50 4.10 1.00 0.40 0.20 1.00 1.50 9.90 1.90 0.90 1.00 5.50 0.30 1.30 14.20 1.40 4.70 9.10 0.30 0.00 4.60 6.30 4.30 - 1963 9 4.80 3.40 1.20 0.70 0.90 3.00 11.69 8.79 2.60 0.10 0.20 1.00 0.20 0.10 0.70 0.10 1.50 0.00 0.10 0.10 0.00 0.00 7.79 4.80 26.98 9.29 1.70 9.49 1.30 6.19 -99.99 - 1963 10 3.89 14.98 3.00 7.79 2.10 3.89 6.59 11.58 19.37 0.20 0.00 7.79 1.30 1.50 7.19 1.00 2.10 1.70 14.38 1.40 16.18 3.60 1.90 0.10 0.00 0.00 0.00 4.79 2.10 0.20 8.09 - 1963 11 3.10 1.50 0.90 1.20 2.90 0.90 2.40 5.39 2.10 29.57 15.99 14.39 9.19 0.70 0.10 0.10 20.68 6.69 2.40 12.39 17.98 14.29 28.77 6.39 5.00 0.00 6.59 2.10 0.10 0.40 -99.99 - 1963 12 0.40 0.00 0.00 0.00 0.00 0.20 0.50 1.10 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.10 0.10 0.00 1.00 0.00 0.20 0.00 8.48 3.99 2.00 2.00 6.19 13.08 11.48 1.90 - 1964 1 0.29 0.10 8.87 4.09 0.49 1.36 0.19 0.00 0.00 0.19 0.10 0.49 1.36 1.17 0.00 0.00 0.10 3.90 1.07 0.19 0.00 0.10 2.44 0.49 0.39 0.10 17.05 4.00 10.72 9.26 7.89 - 1964 2 1.08 4.30 6.01 2.06 0.09 1.08 0.00 0.45 2.15 2.42 0.99 0.00 0.00 0.18 0.09 0.00 1.08 0.36 0.00 0.00 0.72 3.23 6.28 1.97 0.27 2.24 2.69 0.18 0.00 -99.99 -99.99 - 1964 3 0.00 0.00 1.18 0.00 0.63 0.09 0.00 0.09 0.54 0.81 0.00 3.62 2.44 11.57 1.27 0.00 0.63 0.00 13.29 7.32 1.18 0.09 8.50 4.79 0.00 0.00 0.45 0.00 0.09 0.45 0.36 - 1964 4 0.00 0.09 0.09 0.00 2.40 0.09 2.49 1.11 1.85 0.74 8.03 1.66 10.25 2.49 2.95 2.12 2.49 1.02 14.96 1.57 3.69 0.46 1.85 1.48 5.54 3.05 7.29 1.66 3.97 3.79 -99.99 - 1964 5 
2.05 8.49 8.49 4.29 3.64 4.67 18.30 3.17 6.63 9.06 6.16 2.61 2.61 0.19 0.00 0.19 0.09 7.00 2.15 2.33 7.28 0.19 0.00 0.00 0.00 0.00 0.00 0.00 3.92 0.28 0.09 - 1964 6 0.09 1.04 5.56 4.43 0.85 16.11 1.79 0.57 15.93 7.82 1.23 11.78 0.19 6.22 3.49 5.18 0.19 0.09 0.38 0.28 0.00 0.00 0.09 0.57 0.00 4.99 0.00 0.19 0.75 0.09 -99.99 - 1964 7 0.47 0.56 0.19 0.28 0.00 5.77 24.38 4.19 1.02 12.47 0.65 0.09 8.93 4.19 0.00 0.09 2.33 2.61 0.28 0.56 2.23 0.09 0.28 10.05 0.09 0.09 0.74 3.07 2.98 1.40 2.23 - 1964 8 0.29 0.68 0.39 0.29 4.76 1.56 1.46 1.75 0.19 0.78 0.00 0.19 0.00 7.19 3.01 10.79 25.86 8.17 0.39 0.68 7.78 11.28 10.89 11.57 6.42 5.25 8.65 1.94 0.10 0.00 0.00 - 1964 9 0.00 0.00 0.10 0.10 13.94 0.77 4.55 15.39 20.42 5.52 0.29 0.10 7.74 14.23 6.97 4.45 3.29 6.39 3.19 1.06 23.33 12.20 0.10 1.36 1.84 3.58 0.10 2.61 0.10 0.10 -99.99 - 1964 10 0.00 0.00 0.00 0.10 18.12 31.61 8.77 4.14 0.29 2.51 1.06 2.60 14.36 3.37 0.67 0.48 0.48 5.11 0.77 1.54 0.10 7.13 0.96 3.37 0.96 2.60 0.00 0.00 0.00 0.00 0.10 - 1964 11 0.00 0.09 0.09 0.00 0.56 0.19 0.09 0.09 0.00 0.19 7.59 9.56 10.12 15.65 13.78 4.87 2.44 15.09 4.87 1.78 0.28 3.28 5.15 5.53 1.41 7.69 10.78 4.50 1.41 5.81 -99.99 - 1964 12 0.47 1.87 0.65 4.48 7.47 5.60 17.56 17.65 3.08 2.43 26.43 13.45 3.46 2.24 4.20 1.87 0.09 0.00 1.31 0.47 0.28 0.09 0.09 0.09 1.68 7.66 0.37 13.64 18.31 7.29 11.30 - 1965 1 0.30 0.00 0.00 0.90 0.40 9.89 10.19 17.68 16.28 17.38 7.39 5.59 22.37 7.79 7.49 14.98 6.19 0.00 0.10 0.00 3.10 4.19 11.19 0.20 0.30 1.00 0.30 0.20 0.00 0.00 0.00 - 1965 2 0.00 0.10 0.00 0.00 0.00 0.10 0.10 0.10 0.20 0.51 8.55 6.01 0.10 0.10 0.00 0.00 0.31 0.10 0.71 0.20 0.20 0.00 0.51 0.00 0.31 0.00 0.41 3.77 -99.99 -99.99 -99.99 - 1965 3 0.20 1.10 5.00 0.50 0.10 1.00 3.60 0.20 0.00 0.00 0.00 0.30 4.80 2.60 9.60 0.40 0.50 4.30 3.90 0.40 4.70 2.50 1.90 11.00 8.00 21.40 2.70 0.00 0.00 0.00 0.00 - 1965 4 0.00 0.00 0.60 0.50 0.10 2.40 3.81 0.80 16.73 6.91 11.92 2.91 0.60 12.83 0.60 15.53 2.91 3.01 0.20 0.00 0.00 1.60 0.10 1.00 4.91 3.61 1.20 1.10 1.80 0.00 -99.99 - 1965 5 0.00 2.40 3.30 1.40 0.20 3.30 9.01 6.51 1.30 0.00 0.20 0.00 0.00 5.81 0.20 3.00 9.71 0.50 0.00 0.20 4.21 8.51 4.81 5.81 5.21 1.20 4.21 0.00 0.00 0.00 0.00 - 1965 6 0.00 0.00 0.00 5.11 1.90 3.61 0.20 0.00 0.00 0.10 1.20 4.81 0.00 12.42 12.22 1.00 13.22 4.21 1.50 7.91 0.80 4.01 7.61 22.14 9.92 0.40 0.30 0.80 0.10 0.00 -99.99 - 1965 7 0.10 0.00 0.10 0.10 0.30 0.10 1.00 0.80 0.00 11.20 3.60 2.90 5.00 0.10 0.00 0.00 0.00 0.00 0.10 5.40 2.70 1.40 7.60 5.50 0.40 0.70 21.20 32.00 12.20 5.10 4.00 - 1965 8 0.70 3.50 0.00 28.90 7.50 3.40 1.10 0.00 0.00 0.00 0.10 0.00 0.00 5.70 0.00 0.50 3.90 0.60 2.20 21.20 7.10 2.40 1.40 11.50 3.80 0.00 4.10 8.20 2.90 8.00 0.40 - 1965 9 0.00 0.10 6.40 1.30 5.40 18.30 3.40 3.30 6.60 0.70 0.00 0.20 0.20 22.80 3.10 1.00 20.10 0.00 0.30 2.30 10.90 0.90 7.90 17.50 27.30 2.50 0.10 1.10 1.70 0.40 -99.99 - 1965 10 12.70 2.90 3.80 9.80 1.40 1.20 0.10 0.00 0.00 0.00 0.00 0.00 0.30 7.70 0.30 1.20 0.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 13.70 20.90 11.20 10.90 7.80 38.90 - 1965 11 10.70 2.00 0.10 0.10 0.00 0.30 7.80 1.70 0.20 0.10 1.10 2.20 0.20 0.30 0.00 2.20 2.60 2.20 11.40 2.00 0.00 7.60 8.80 2.40 1.40 6.00 0.90 1.80 1.70 0.10 -99.99 - 1965 12 13.40 7.20 0.30 14.40 5.30 0.50 10.90 15.00 9.60 1.50 1.90 2.60 4.60 7.80 2.40 3.80 16.20 2.60 2.40 2.00 3.20 10.30 1.70 0.30 0.00 0.10 0.00 4.90 9.60 3.60 10.10 - 1966 1 11.30 2.20 0.10 4.90 12.60 3.10 0.20 0.80 0.00 0.00 0.10 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.20 0.30 0.20 0.00 11.30 6.00 6.80 13.40 12.70 0.20 0.00 - 1966 2 
2.70 5.90 5.80 26.32 7.11 5.80 3.90 1.30 1.30 0.50 0.10 0.80 0.30 0.00 0.00 0.00 0.00 13.31 6.60 9.31 2.90 10.71 2.00 11.91 7.01 5.60 7.81 0.00 -99.99 -99.99 -99.99 - 1966 3 16.19 2.70 4.20 2.20 0.40 0.80 9.59 1.50 10.29 5.40 2.30 1.60 0.70 0.50 0.10 0.20 4.80 0.00 0.00 1.90 1.90 5.90 1.90 0.60 15.99 19.68 0.60 0.10 2.70 2.60 8.19 - 1966 4 0.00 0.00 0.00 0.70 8.38 0.30 0.10 3.09 9.97 1.90 0.70 0.10 0.00 0.40 0.40 0.10 0.00 2.89 2.79 2.39 8.38 16.96 3.29 1.70 3.19 6.68 1.60 0.40 0.60 0.00 -99.99 - 1966 5 0.50 0.70 0.90 7.01 2.90 1.60 7.81 1.40 0.10 5.11 4.70 0.40 2.60 1.10 0.00 13.11 1.70 1.30 9.41 2.80 16.32 5.71 1.30 10.01 2.70 0.00 0.00 0.00 0.00 0.00 0.00 - 1966 6 0.10 0.00 6.71 15.81 0.40 4.20 0.40 0.00 8.71 0.20 0.00 1.00 8.71 1.60 4.10 2.70 5.10 3.60 3.60 5.60 6.40 16.51 18.71 0.80 1.00 10.21 1.60 0.00 0.50 0.00 -99.99 - 1966 7 0.10 0.50 0.00 0.00 1.61 0.70 0.30 0.40 2.51 4.92 1.00 4.32 1.41 0.00 1.71 0.00 0.00 0.00 0.00 0.00 0.00 1.20 0.90 1.31 11.85 3.71 5.92 2.91 4.02 0.60 0.00 - 1966 8 0.80 0.20 7.32 5.22 0.10 0.20 0.10 1.30 18.15 6.12 5.22 1.10 32.59 0.10 0.00 4.91 0.90 1.60 0.90 11.43 2.31 0.70 0.10 0.00 0.00 0.00 0.10 0.00 2.21 0.30 0.00 - 1966 9 14.23 1.80 32.47 3.41 2.81 0.40 0.40 1.70 21.65 5.81 10.82 5.91 6.92 13.83 0.20 4.31 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.30 1.40 0.30 7.42 -99.99 - 1966 10 8.90 0.70 9.40 0.00 23.80 6.20 8.00 0.40 1.60 4.20 1.10 0.20 7.40 0.50 0.40 3.40 4.00 1.60 4.40 4.50 2.70 2.60 0.20 0.00 1.30 0.40 0.50 0.50 0.50 2.40 1.60 - 1966 11 2.40 0.00 2.60 2.40 0.80 1.40 0.80 2.10 1.40 0.30 16.69 6.89 7.89 13.19 11.79 0.30 0.00 0.20 0.00 0.00 0.10 0.00 0.90 7.09 4.60 5.10 4.60 2.40 18.39 13.09 -99.99 - 1966 12 12.91 1.40 0.60 1.90 8.50 1.30 17.91 7.20 3.40 0.50 8.00 2.40 0.40 6.10 10.71 2.00 23.51 3.00 26.81 3.60 2.30 5.20 1.80 2.40 2.40 9.70 3.70 11.81 5.80 3.10 7.60 - 1967 1 0.50 0.10 0.20 0.00 0.90 1.10 0.60 0.30 0.50 0.10 0.10 0.10 0.80 0.00 0.00 2.70 12.69 13.19 7.99 7.69 8.79 7.89 4.60 4.00 13.79 7.39 4.20 2.20 3.40 3.40 9.19 - 1967 2 3.60 10.79 4.80 0.90 0.20 2.80 0.00 0.40 0.50 0.00 0.00 0.00 0.00 0.00 2.60 7.09 4.40 9.79 16.19 7.09 3.40 9.69 1.80 6.00 7.49 9.79 19.49 9.49 -99.99 -99.99 -99.99 - 1967 3 13.01 2.70 0.30 10.31 10.11 4.40 2.30 2.50 8.41 5.70 10.51 1.10 3.40 5.00 6.30 7.91 3.90 1.00 0.50 2.40 1.10 4.80 2.40 8.61 8.81 6.60 2.00 2.10 0.60 0.60 0.00 - 1967 4 18.57 5.09 2.20 2.20 0.10 0.80 1.00 0.50 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.70 0.10 1.90 5.79 1.90 1.10 3.49 5.99 7.39 0.00 2.30 0.30 0.20 0.90 1.90 -99.99 - 1967 5 1.00 1.20 10.22 3.51 3.71 6.61 5.21 3.71 1.30 0.80 5.51 0.70 1.70 0.90 7.91 6.21 3.31 11.72 3.31 3.51 13.22 12.82 3.61 4.71 1.90 1.40 5.01 5.11 0.20 0.30 0.00 - 1967 6 0.30 5.11 2.50 0.10 2.90 9.51 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11.42 1.70 6.81 7.51 0.30 2.00 5.51 0.00 4.51 5.21 0.30 0.00 -99.99 - 1967 7 5.11 5.11 4.81 0.10 0.10 2.90 14.32 0.70 1.00 0.80 0.80 0.90 8.51 13.72 0.40 2.30 10.82 5.61 1.90 1.00 0.10 0.00 7.71 1.60 3.91 5.41 0.60 0.80 7.51 1.40 8.01 - 1967 8 0.90 1.10 5.51 1.80 0.40 0.70 0.00 4.91 2.40 3.30 16.72 2.30 0.60 13.92 6.31 5.01 0.50 0.70 0.00 0.00 0.00 0.30 0.00 0.20 0.80 0.40 6.71 4.41 2.70 3.90 0.00 - 1967 9 6.40 18.60 7.20 19.20 6.70 0.70 0.00 0.10 0.40 12.40 13.60 0.00 0.00 0.00 0.00 0.10 3.50 4.00 0.80 0.40 0.30 0.10 1.10 11.80 5.00 4.40 0.50 3.70 7.90 10.20 -99.99 - 1967 10 23.68 12.99 11.59 0.10 3.00 22.38 9.79 28.98 14.69 2.20 3.00 5.30 15.09 7.29 5.90 6.99 4.10 17.99 8.29 1.60 0.40 5.40 7.99 15.49 15.49 8.39 2.20 0.40 0.90 3.90 1.20 - 1967 11 11.49 
3.10 2.60 0.40 0.30 0.70 1.90 10.69 3.00 12.99 4.30 10.59 10.69 6.79 0.30 0.00 0.10 0.20 0.50 0.30 0.00 0.00 0.20 4.70 1.20 5.10 10.19 5.89 3.20 1.00 -99.99 - 1967 12 0.60 1.30 0.00 3.10 1.60 0.30 0.10 0.50 0.40 4.10 3.50 0.20 0.90 1.70 4.10 0.10 0.80 0.00 0.00 15.22 12.11 16.12 4.80 8.81 0.10 3.30 4.80 0.90 3.10 4.30 3.00 - 1968 1 5.99 6.98 1.00 1.30 3.19 0.10 0.20 2.89 0.20 1.90 0.00 3.29 10.48 16.46 7.08 13.97 5.59 8.98 0.70 0.20 0.20 0.40 0.50 1.50 0.50 0.90 2.29 1.40 9.18 9.68 13.57 - 1968 2 5.98 3.95 9.19 11.59 8.28 2.94 6.16 4.23 0.18 0.18 0.00 5.88 0.28 0.00 0.28 2.21 0.46 0.09 6.34 2.39 0.00 0.09 3.13 0.18 0.00 0.00 0.00 0.09 0.09 -99.99 -99.99 - 1968 3 0.10 0.10 0.10 0.40 0.20 0.10 0.00 0.00 0.20 0.00 0.00 3.40 3.40 9.30 0.90 18.20 13.50 8.80 9.70 2.20 6.80 11.70 8.80 0.80 2.90 19.60 0.70 0.20 0.30 3.40 16.40 - 1968 4 11.10 2.90 0.90 0.30 0.10 1.20 0.00 0.40 0.10 0.00 0.00 0.00 0.00 0.00 0.00 8.40 4.30 6.10 10.10 0.20 5.60 0.50 0.30 0.00 0.00 2.20 5.90 0.50 0.50 0.90 -99.99 - 1968 5 6.01 9.61 9.71 15.01 16.82 0.80 0.60 0.00 10.51 3.90 2.80 3.20 0.70 12.21 2.60 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.80 3.00 0.10 0.00 0.00 2.40 1.20 - 1968 6 2.61 0.60 1.10 0.00 1.30 1.90 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.80 6.21 3.91 1.00 3.11 5.51 7.01 9.12 2.81 0.00 0.00 2.10 0.80 -99.99 - 1968 7 18.88 29.52 7.23 0.30 0.40 0.70 1.00 5.02 0.10 1.81 0.60 0.20 0.00 0.50 2.41 0.80 0.30 0.40 1.10 0.90 0.10 0.90 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.80 0.80 - 1968 8 0.00 0.00 0.30 0.40 1.90 0.00 0.10 0.00 0.00 0.00 0.00 17.00 20.70 0.90 4.50 0.40 1.40 0.30 17.10 1.20 0.50 8.40 0.60 0.00 0.00 0.00 0.80 0.20 0.00 6.30 1.30 - 1968 9 11.11 11.21 5.40 0.10 8.30 0.00 0.40 4.40 0.40 8.30 7.20 18.41 0.70 0.10 0.00 0.00 0.00 0.00 10.51 1.30 0.10 10.81 0.50 0.00 11.61 7.00 17.01 10.61 8.71 15.81 -99.99 - 1968 10 20.78 4.30 5.29 1.30 3.00 0.50 0.80 0.20 21.38 7.09 18.08 9.69 4.20 4.20 10.39 7.79 0.10 0.10 20.88 0.20 0.00 0.00 0.00 0.00 1.20 3.70 9.79 6.89 8.49 7.49 25.38 - 1968 11 3.00 0.00 0.00 0.00 0.40 0.00 0.30 0.10 0.40 4.50 2.30 0.00 0.00 0.00 0.00 0.00 0.00 0.10 1.10 10.79 30.27 11.19 8.39 8.09 2.70 0.30 14.89 0.10 0.00 0.50 -99.99 - 1968 12 0.56 0.56 4.74 3.53 3.53 0.37 0.00 0.19 0.00 0.09 0.00 2.79 0.84 0.93 2.97 5.39 0.84 1.02 11.71 4.18 12.55 10.97 0.46 0.37 0.09 0.09 0.09 0.00 0.09 0.09 0.74 - 1969 1 0.30 0.60 4.10 6.50 2.00 1.90 6.00 5.90 0.20 7.90 1.00 13.50 0.80 3.50 1.40 1.90 4.20 3.50 0.40 15.50 5.30 0.10 5.60 4.20 2.00 3.90 8.30 0.60 3.40 6.80 4.60 - 1969 2 5.81 0.00 0.60 0.80 10.82 2.40 0.50 0.90 0.40 11.22 1.90 0.20 0.30 0.40 0.10 0.10 1.90 0.90 0.10 0.10 0.10 9.22 0.30 0.60 1.50 0.20 0.20 0.20 -99.99 -99.99 -99.99 - 1969 3 0.20 0.40 0.00 0.00 0.00 0.00 0.10 1.00 0.00 0.20 0.00 1.10 3.20 0.90 0.60 0.10 2.40 3.70 3.20 0.30 0.10 0.00 0.00 0.20 0.00 0.00 0.00 0.40 4.60 6.00 3.50 - 1969 4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.31 5.13 1.81 10.06 1.11 1.61 10.16 1.11 0.70 0.00 0.00 0.00 0.20 3.72 0.60 3.52 6.23 2.82 1.01 0.70 0.40 0.10 0.40 -99.99 - 1969 5 0.20 4.50 0.30 1.30 0.20 9.91 11.91 6.31 0.80 4.20 3.10 4.30 8.81 0.90 1.80 0.20 1.30 0.30 0.00 0.00 0.00 0.00 0.90 15.82 0.90 7.71 2.80 2.20 0.70 3.20 1.40 - 1969 6 0.40 10.71 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 8.71 6.51 11.61 8.81 3.50 7.11 4.20 2.70 2.90 1.20 0.90 10.51 1.20 2.00 1.30 2.10 -99.99 - 1969 7 4.49 0.00 6.79 1.80 1.60 1.00 3.70 0.40 0.60 4.39 1.40 0.20 0.00 0.00 0.00 1.00 2.30 11.48 0.50 2.50 2.60 9.59 0.30 0.00 6.89 7.09 2.50 1.00 0.00 0.00 0.00 - 1969 8 0.20 3.61 6.91 3.00 1.20 
0.00 0.00 13.02 6.11 0.70 1.30 0.10 0.90 1.90 0.00 0.90 0.80 0.70 7.01 3.31 3.91 1.60 0.00 2.50 2.80 0.90 0.10 0.00 0.00 0.00 0.10 - 1969 9 0.00 0.20 0.00 0.10 0.10 0.30 0.60 0.50 11.10 8.30 5.30 0.00 0.00 0.00 0.00 0.10 3.10 1.70 0.00 6.80 11.20 0.30 2.40 4.20 8.20 11.60 3.00 6.20 2.60 1.90 -99.99 - 1969 10 13.34 5.92 2.11 0.10 3.31 2.21 2.61 14.54 0.80 0.00 0.00 0.00 9.43 1.70 1.40 3.91 0.20 0.20 0.40 0.50 0.80 6.42 18.65 5.11 0.40 0.40 0.40 2.21 1.81 2.91 1.00 - 1969 11 18.41 18.21 3.60 6.70 13.31 4.70 14.01 9.91 8.50 7.60 1.20 7.10 2.90 2.70 2.30 0.20 0.90 3.10 6.20 7.20 7.10 11.41 0.20 0.00 0.10 2.40 10.11 0.60 0.40 2.70 -99.99 - 1969 12 2.80 8.49 0.10 0.00 2.60 4.00 0.50 0.70 0.50 6.10 0.30 0.10 27.98 12.49 2.50 2.60 3.30 5.00 5.80 14.69 18.49 1.50 6.89 0.40 0.40 0.40 0.00 0.00 0.20 0.10 0.20 - 1970 1 0.70 0.40 0.40 0.40 4.20 0.00 0.00 1.80 10.19 0.80 10.99 3.90 1.40 6.29 4.80 0.40 12.69 4.40 8.39 3.50 7.19 0.60 0.50 8.79 3.50 5.20 0.10 0.00 1.50 1.20 7.89 - 1970 2 21.42 5.40 1.00 0.30 0.00 8.81 6.10 5.30 1.90 0.30 0.10 0.10 0.00 0.00 0.10 7.81 2.50 8.01 19.72 5.40 13.41 11.71 5.30 0.70 0.20 0.40 0.20 0.50 -99.99 -99.99 -99.99 - 1970 3 0.90 0.20 1.90 0.30 0.50 0.40 0.70 0.40 0.20 7.99 3.59 0.80 0.10 0.10 0.00 8.09 5.09 1.70 10.58 1.60 2.00 0.60 0.00 0.00 2.20 0.50 0.60 2.80 13.58 1.30 0.10 - 1970 4 0.20 3.00 0.40 0.70 10.21 0.10 0.20 0.60 0.50 0.90 1.70 0.00 0.00 3.70 3.80 13.21 8.01 0.70 2.80 5.91 17.52 21.82 1.70 0.70 0.30 0.10 0.70 1.90 0.30 0.30 -99.99 - 1970 5 6.81 1.50 0.00 0.60 7.71 0.30 5.61 0.50 1.90 0.00 0.00 0.00 1.70 1.60 1.20 0.00 0.10 2.90 1.20 8.21 0.20 0.80 0.40 8.21 5.61 0.30 0.50 0.30 1.30 1.50 4.31 - 1970 6 1.50 0.20 0.00 0.00 0.00 0.20 0.20 0.40 0.00 5.69 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20 2.00 3.79 10.18 12.08 0.20 5.89 7.29 3.30 3.30 9.79 -99.99 - 1970 7 6.90 0.80 0.90 0.60 13.90 4.50 0.20 9.30 0.40 5.30 7.00 6.00 2.50 0.40 0.10 0.10 0.50 4.80 0.80 3.00 2.40 2.90 14.80 8.90 0.50 3.60 3.20 3.30 3.30 15.00 0.80 - 1970 8 0.30 0.00 0.10 0.00 0.00 0.00 0.00 0.50 6.41 0.10 1.80 1.80 13.42 2.40 30.33 5.21 0.00 0.00 2.80 7.51 0.20 0.30 0.00 0.00 0.00 0.00 0.00 0.00 0.20 2.90 11.91 - 1970 9 9.20 10.50 1.40 1.10 0.80 2.00 21.40 9.20 16.80 6.20 2.10 1.70 8.30 0.70 0.30 9.90 20.00 0.70 7.10 2.00 0.10 0.50 0.00 0.40 10.70 3.10 0.80 10.00 11.00 4.30 -99.99 - 1970 10 21.74 2.50 6.61 16.23 9.92 6.21 1.80 0.10 1.70 0.10 2.20 0.00 0.20 0.00 0.00 0.00 1.20 9.52 0.40 0.00 0.00 0.40 9.32 11.82 6.51 3.41 7.11 11.32 22.34 3.71 26.64 - 1970 11 7.71 24.12 9.81 15.22 0.50 0.00 4.60 5.41 0.70 10.31 11.11 11.61 7.21 0.30 8.91 3.20 5.71 1.40 2.00 5.31 1.30 2.60 31.43 1.70 2.20 10.91 9.71 3.70 1.90 0.50 -99.99 - 1970 12 3.60 17.82 3.00 6.11 7.61 2.10 0.00 0.10 0.00 0.20 0.80 1.90 0.70 0.80 1.00 12.82 4.41 5.91 2.90 0.30 0.20 0.00 1.00 0.20 0.70 0.30 0.30 0.70 0.30 0.10 0.00 - 1971 1 0.50 0.10 0.00 0.60 6.41 15.13 2.81 4.41 7.21 0.00 0.00 0.00 0.70 0.10 0.30 2.20 3.81 10.12 4.91 5.51 0.60 6.41 6.51 11.12 5.11 1.00 4.01 5.61 2.91 0.30 0.00 - 1971 2 0.50 3.10 0.20 0.10 0.00 0.40 0.00 2.10 0.70 3.50 17.82 21.72 12.91 12.41 2.80 6.01 6.61 0.10 7.31 4.50 0.80 0.40 1.00 0.50 0.10 0.20 9.31 2.30 -99.99 -99.99 -99.99 - 1971 3 12.60 0.10 0.00 0.70 0.30 0.00 0.00 0.00 0.50 1.40 3.00 3.50 3.50 0.20 0.00 0.70 0.40 4.40 5.50 0.10 0.00 0.80 4.30 5.70 3.40 0.60 2.60 6.10 4.10 2.20 0.90 - 1971 4 0.00 0.10 0.30 1.80 0.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.11 0.20 4.51 0.70 0.50 0.40 0.80 9.42 28.65 3.21 0.00 0.10 0.00 0.00 0.10 2.70 -99.99 - 1971 5 0.00 0.00 0.00 0.00 0.00 4.01 2.81 
0.50 1.70 0.10 0.30 0.00 0.00 4.11 5.62 5.11 4.61 0.60 0.30 0.00 0.10 5.82 12.24 0.40 0.20 7.02 3.91 0.60 3.51 4.21 0.50 - 1971 6 0.00 0.00 0.00 0.00 0.00 0.10 0.20 0.40 0.30 0.20 4.70 0.00 1.10 0.40 1.30 0.70 0.10 3.20 5.50 8.10 9.50 0.00 1.50 4.70 7.60 6.00 4.50 0.40 0.20 2.70 -99.99 - 1971 7 0.40 0.40 8.70 2.80 0.00 0.00 0.00 0.10 0.00 0.00 0.10 0.00 0.00 0.20 0.20 0.00 0.00 0.00 0.00 0.20 4.50 8.60 10.50 24.00 7.80 4.00 4.10 0.00 0.00 10.80 7.80 - 1971 8 5.81 0.70 6.21 5.41 10.72 7.21 0.40 4.31 0.70 0.00 0.10 6.31 12.42 0.90 0.40 0.10 0.00 0.00 0.00 0.00 0.10 2.71 0.80 0.40 0.20 6.71 7.62 2.10 6.51 5.11 7.51 - 1971 9 4.01 9.72 1.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5.81 7.01 0.20 0.70 1.60 0.30 1.50 0.60 0.20 0.20 0.60 0.00 3.11 2.81 0.10 4.21 5.41 0.40 -99.99 - 1971 10 0.20 0.00 0.00 0.00 5.00 4.00 7.50 1.40 9.11 21.51 3.20 0.20 0.00 0.90 14.91 5.20 20.51 8.21 16.11 13.31 15.81 2.30 6.10 0.80 0.00 0.00 0.00 1.50 2.00 3.20 0.30 - 1971 11 2.40 3.60 3.30 25.52 2.30 4.80 8.51 0.20 0.20 0.10 0.50 2.70 0.30 1.80 5.90 1.10 12.41 0.70 0.30 18.82 0.20 2.70 0.30 1.60 1.00 4.40 2.90 1.80 6.31 2.00 -99.99 - 1971 12 0.50 0.00 5.31 0.60 0.00 0.10 1.40 0.90 1.00 0.10 1.20 7.01 5.01 6.31 0.90 0.40 0.00 11.51 10.31 17.02 3.00 3.50 4.81 0.90 2.00 8.41 0.00 0.60 0.10 0.00 0.00 - 1972 1 2.10 2.30 0.50 0.10 0.00 0.00 2.10 10.49 2.20 12.99 17.49 6.30 7.00 0.40 2.50 4.80 6.50 24.18 3.10 4.10 3.10 3.80 11.09 5.70 5.10 5.60 0.40 0.20 0.20 0.10 0.10 - 1972 2 8.39 6.19 2.10 0.10 2.50 0.60 0.30 2.70 4.00 10.09 4.89 9.19 2.00 1.90 15.48 3.10 0.70 0.00 0.00 0.10 0.10 0.00 0.00 0.00 5.29 0.80 3.40 2.90 2.80 -99.99 -99.99 - 1972 3 1.40 9.12 7.82 5.91 0.20 1.20 0.80 0.10 0.10 0.10 0.00 0.00 0.00 1.20 0.00 5.61 0.50 1.30 3.91 0.00 0.20 0.50 0.00 0.00 4.01 12.03 4.31 1.20 11.43 1.30 7.22 - 1972 4 19.69 14.39 5.10 6.70 9.39 2.40 11.89 9.99 13.09 7.99 0.70 1.30 0.10 5.00 0.60 0.80 0.30 0.00 0.00 0.00 0.60 0.10 0.00 0.00 0.00 0.00 0.40 23.09 12.59 8.29 -99.99 - 1972 5 2.00 0.00 1.00 0.00 6.60 6.30 6.10 5.10 2.10 0.40 9.90 4.10 0.10 0.10 0.00 0.00 0.00 0.10 0.50 3.30 4.50 0.60 2.50 7.70 26.30 10.20 1.20 2.50 12.80 4.80 3.70 - 1972 6 5.90 10.11 4.90 10.01 4.30 1.20 11.61 0.60 1.60 1.50 0.20 1.30 0.00 0.80 0.00 0.00 24.52 3.80 1.90 12.81 4.70 3.40 4.70 7.31 0.40 3.60 9.71 1.60 0.60 1.70 -99.99 - 1972 7 1.00 0.20 3.51 9.82 2.10 4.31 0.10 0.30 2.40 3.00 0.10 6.31 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 8.61 4.41 8.11 0.00 0.00 0.00 0.60 0.40 2.30 3.81 - 1972 8 1.11 0.10 9.25 3.72 3.52 6.73 11.96 8.04 0.80 0.50 0.30 0.20 0.70 0.00 0.00 9.65 1.31 0.00 1.41 0.00 0.00 0.10 0.80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 - 1972 9 0.00 0.00 0.00 0.00 0.90 3.02 0.50 1.01 0.40 1.31 0.80 3.22 0.20 0.00 0.00 0.00 0.00 0.00 0.00 6.53 1.91 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 -99.99 - 1972 10 0.00 0.00 0.00 0.00 0.00 0.00 0.30 10.90 8.60 0.40 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20 0.00 3.10 3.60 0.30 2.40 0.10 1.50 9.70 6.50 7.30 0.50 0.70 - 1972 11 2.80 3.00 0.10 0.40 1.60 5.69 0.90 11.29 21.27 10.59 6.59 4.09 0.10 6.39 5.49 0.50 0.10 9.59 11.59 1.40 0.50 0.40 0.00 1.00 2.50 0.30 10.49 9.99 18.78 15.48 -99.99 - 1972 12 4.50 0.20 9.01 13.41 6.40 6.00 3.20 0.20 9.11 8.41 24.02 8.41 3.90 0.50 0.00 2.50 0.00 0.00 0.00 0.60 0.30 0.40 2.80 0.20 2.00 5.10 3.30 3.90 2.00 7.31 12.71 - 1973 1 8.60 6.80 0.40 0.00 0.00 0.00 0.00 0.00 0.00 0.30 0.00 0.30 2.10 5.30 14.20 1.90 0.10 3.10 18.00 21.60 9.30 3.70 6.40 1.20 13.50 3.10 1.30 1.80 5.60 5.80 1.80 - 1973 2 0.40 0.40 0.60 3.50 5.81 4.50 8.51 8.41 3.20 1.10 13.01 9.71 
6.41 1.60 0.50 0.00 0.00 2.30 2.00 1.20 3.50 8.01 2.20 0.00 0.00 0.10 4.50 7.01 -99.99 -99.99 -99.99 - 1973 3 6.45 6.95 2.88 2.38 3.97 0.30 0.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.49 0.40 4.07 0.50 0.00 6.55 0.30 14.29 3.57 9.33 - 1973 4 4.48 0.40 18.41 8.36 8.76 2.39 0.90 0.90 0.30 0.10 0.30 0.30 0.00 0.10 0.00 0.00 0.00 0.10 0.10 1.19 2.79 0.30 0.90 0.30 0.00 0.60 0.90 0.00 4.78 7.17 -99.99 - 1973 5 0.10 0.20 7.38 13.27 4.79 3.39 2.39 2.39 9.58 3.09 1.40 15.17 0.30 0.40 0.00 0.00 0.00 0.00 0.20 3.39 7.48 2.20 0.20 1.40 1.80 1.00 3.59 0.40 4.09 1.00 0.40 - 1973 6 4.71 3.30 0.50 0.10 0.00 0.00 0.00 0.50 3.00 0.20 2.80 14.02 1.10 2.10 0.00 3.60 0.70 28.64 0.70 0.00 0.00 0.00 0.30 0.60 0.30 0.30 0.00 3.40 1.20 5.51 -99.99 - 1973 7 12.06 1.21 6.03 0.50 2.71 2.41 0.20 0.00 0.20 7.24 1.01 7.84 7.34 6.53 0.00 0.00 0.70 5.83 3.92 4.22 2.01 1.41 0.20 0.00 0.00 0.00 0.00 1.81 0.20 0.30 0.60 - 1973 8 3.41 3.61 5.02 9.83 6.52 11.04 2.81 7.02 5.82 1.30 0.00 0.00 0.00 0.00 0.00 0.30 0.10 1.50 4.11 0.00 0.00 1.30 0.20 0.00 0.00 3.51 1.50 2.91 3.91 7.52 8.53 - 1973 9 1.70 1.20 6.19 7.89 0.70 3.30 0.00 0.00 0.50 0.00 0.00 0.00 0.00 0.00 1.40 8.09 4.69 0.90 0.00 0.10 2.10 0.40 0.00 5.59 3.50 0.20 20.07 9.79 0.70 0.60 -99.99 - 1973 10 0.50 0.00 0.00 0.10 0.00 0.70 2.31 11.43 4.51 0.40 0.00 0.00 0.10 0.00 0.00 0.30 0.80 14.84 4.01 11.23 7.22 1.10 1.91 1.00 2.61 0.30 0.40 0.00 0.00 0.00 0.20 - 1973 11 0.60 1.50 1.80 14.23 1.40 0.20 3.51 15.44 1.30 5.41 19.04 8.52 4.81 3.11 1.90 0.80 14.33 8.72 0.10 0.20 3.01 0.60 10.02 0.70 0.60 0.60 0.30 5.01 0.00 0.00 -99.99 - 1973 12 0.00 1.20 2.80 1.00 4.30 0.30 11.79 0.10 1.20 11.59 6.60 15.29 2.10 0.60 22.49 3.90 1.00 4.00 22.29 5.40 3.40 8.69 1.00 0.50 8.59 1.10 3.50 2.10 11.39 0.20 0.40 - 1974 1 3.60 0.00 1.40 16.10 8.70 9.50 5.00 13.00 2.50 11.50 10.80 6.10 7.10 6.70 4.50 9.30 29.10 5.80 0.10 0.10 0.20 9.10 6.30 1.90 8.60 10.70 4.40 9.70 26.30 12.70 2.80 - 1974 2 5.00 5.60 1.10 4.50 3.20 0.50 4.50 15.60 9.80 1.80 7.50 2.70 1.30 12.50 11.70 0.80 0.00 0.40 0.90 3.10 2.60 2.70 1.20 0.70 0.50 0.50 0.50 10.00 -99.99 -99.99 -99.99 - 1974 3 1.70 0.20 1.10 0.50 0.20 13.80 0.20 0.00 0.30 0.10 0.20 0.60 1.60 7.90 7.30 5.10 7.50 2.10 6.80 0.00 12.10 0.10 0.00 0.60 2.20 0.10 0.00 0.00 0.00 0.00 0.00 - 1974 4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20 6.15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.59 2.97 0.10 0.20 -99.99 - 1974 5 3.19 1.69 0.40 0.20 0.30 0.00 0.10 2.19 7.48 4.98 2.49 5.38 1.00 0.00 0.00 1.10 6.38 3.49 2.59 0.40 3.39 4.59 4.09 0.10 0.10 0.10 6.78 0.40 0.10 0.00 0.50 - 1974 6 5.90 1.30 0.90 3.00 6.90 7.80 6.90 0.80 2.50 6.40 1.90 0.00 0.00 0.00 0.00 5.10 1.90 1.70 0.90 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.70 11.10 -99.99 - 1974 7 0.00 4.58 1.40 8.67 5.58 0.30 0.50 5.38 0.70 4.58 3.29 2.39 1.20 7.87 8.57 1.10 0.40 2.19 0.70 0.30 2.89 12.76 1.89 0.10 2.99 2.89 3.59 1.59 0.50 0.10 0.00 - 1974 8 3.10 0.70 0.10 0.00 3.70 1.40 2.40 4.80 9.71 4.40 13.01 1.80 0.00 14.11 1.10 2.30 1.00 0.40 0.40 3.80 1.10 6.01 4.40 10.71 2.70 1.90 0.90 1.70 0.00 0.00 1.70 - 1974 9 13.41 7.40 4.40 10.91 9.51 9.01 6.50 3.30 3.30 0.40 0.90 13.71 2.50 9.81 1.30 15.41 0.80 1.30 1.30 11.51 7.81 6.50 2.60 8.21 1.40 1.40 0.00 0.00 0.70 0.10 -99.99 - 1974 10 4.19 2.00 0.00 0.00 3.69 2.60 0.40 0.00 0.80 0.10 0.00 1.10 2.90 0.50 0.30 1.90 11.68 7.19 4.39 1.80 0.10 0.00 0.10 0.50 0.70 4.29 2.60 0.10 1.30 5.59 2.00 - 1974 11 2.50 6.20 2.60 0.10 5.70 0.10 10.90 14.60 10.80 22.30 8.10 6.10 20.10 10.60 1.10 4.30 0.90 
Daily data records (late 1974 through August 2011), stored as whitespace-delimited lines of the form `YEAR MONTH` followed by one value per day of that month (28 to 31 entries); the sentinel `-99.99` marks missing days, including day slots that do not exist in shorter months.
23.58 25.66 7.20 8.86 1.54 2.87 5.32 4.37 0.34 0.93 2.34 2.06 0.24 0.00 2.76 10.78 3.25 7.50 1.92 1.17 0.54 1.62 0.08 - 2011 9 4.68 11.99 1.57 9.07 12.56 8.57 7.75 6.32 4.38 5.16 21.00 4.98 7.06 0.09 0.03 12.54 6.54 1.84 7.28 2.12 12.48 1.79 4.35 0.23 12.55 1.84 0.31 0.88 0.10 14.42 -99.99 - 2011 10 19.60 1.83 1.21 4.71 11.61 10.40 3.39 18.50 14.89 6.28 19.76 2.67 0.12 3.74 6.92 4.59 30.63 3.59 1.02 10.77 2.20 9.70 8.16 7.58 5.14 7.06 2.09 8.33 7.32 5.03 6.66 - 2011 11 0.44 3.63 4.20 2.68 0.21 0.21 0.16 0.64 8.46 0.10 6.43 0.25 0.02 0.00 0.08 5.59 12.10 2.07 0.71 1.99 9.07 0.95 10.69 13.38 6.99 15.40 1.47 25.78 17.97 13.40 -99.99 - 2011 12 4.19 5.65 8.68 7.80 8.79 13.30 18.78 11.68 6.05 6.50 3.75 10.39 14.58 3.57 12.34 6.18 2.07 6.41 8.82 7.80 0.96 4.45 3.25 5.64 2.78 4.56 10.93 12.10 6.89 17.49 8.11 - 2012 1 4.21 16.53 9.48 27.33 1.15 3.86 1.03 2.17 0.07 1.38 2.43 0.26 0.19 0.00 0.09 0.07 5.89 5.08 14.52 13.92 7.28 2.88 12.20 5.20 12.63 7.30 3.32 0.72 2.75 0.02 0.00 - 2012 2 0.00 0.02 3.61 12.29 0.69 0.08 0.16 8.56 4.76 6.85 1.65 0.42 0.07 0.08 0.25 1.13 10.60 1.61 4.05 10.74 11.73 3.47 1.44 0.32 1.01 5.04 2.01 0.25 0.02 -99.99 -99.99 - 2012 3 0.90 2.03 5.56 0.50 0.12 13.74 4.77 0.53 2.19 0.39 0.17 0.08 0.00 0.00 5.60 7.45 0.17 0.36 2.93 0.00 0.00 0.07 0.02 0.04 0.00 0.03 0.00 0.03 0.00 0.06 0.07 - 2012 4 0.13 7.02 1.45 0.15 0.38 1.37 0.60 3.83 10.91 2.96 2.77 0.51 0.64 0.16 0.00 16.97 3.21 2.33 1.01 1.88 2.89 4.46 0.77 1.97 3.39 1.76 1.15 0.03 5.48 0.13 -99.99 - 2012 5 0.42 0.00 1.01 0.12 0.00 0.60 9.46 0.68 8.54 14.59 1.57 0.26 13.84 2.29 1.42 4.67 5.66 1.16 0.03 0.00 0.03 0.00 0.05 0.00 0.00 0.03 0.00 0.03 0.07 6.96 1.90 - 2012 6 0.05 0.03 0.00 0.00 10.62 0.82 13.48 5.79 2.36 3.11 2.58 0.15 0.18 7.57 26.95 14.87 0.75 2.42 0.05 0.27 24.90 27.94 8.69 0.25 0.00 7.63 11.45 8.88 7.70 3.65 -99.99 - 2012 7 4.98 5.04 8.38 8.05 4.47 9.28 2.63 4.73 3.58 4.71 1.06 0.00 2.32 0.50 1.07 0.24 22.14 11.49 2.24 0.05 0.56 6.61 16.23 2.37 0.10 0.54 2.59 4.27 3.40 1.29 16.77 - 2012 8 5.49 0.12 2.88 2.68 6.21 3.02 0.06 0.00 0.00 0.03 0.00 6.11 3.81 0.66 21.18 23.05 4.36 0.19 2.35 2.72 7.45 4.16 4.32 3.34 5.76 6.93 17.95 5.51 6.85 0.07 3.73 - 2012 9 3.14 0.10 2.19 0.11 0.07 3.09 0.94 0.00 8.31 10.04 8.18 0.73 4.02 0.48 4.82 6.42 8.25 4.41 9.26 19.11 0.38 0.06 3.09 25.11 4.60 1.85 2.56 4.81 20.28 2.39 -99.99 - 2012 10 6.74 17.54 3.24 7.77 1.54 0.13 0.13 0.03 0.04 4.29 39.86 1.66 7.64 2.10 7.71 4.75 22.20 13.25 2.90 0.14 0.15 0.21 0.43 0.02 0.11 0.04 7.74 5.82 1.41 10.74 12.24 - 2012 11 5.77 4.39 2.04 1.33 1.56 0.86 4.39 7.46 5.70 3.37 3.99 9.26 12.35 2.44 1.73 4.39 3.11 31.22 5.75 2.84 8.39 19.36 1.56 7.63 5.05 0.71 0.02 0.12 0.35 4.86 -99.99 - 2012 12 0.26 10.64 6.68 4.23 3.17 16.29 0.17 2.66 0.02 0.03 0.65 2.73 0.30 15.08 0.37 7.09 0.56 0.13 17.22 27.94 4.48 26.56 3.25 11.65 4.57 19.66 10.19 4.95 5.55 17.88 5.07 - 2013 1 7.26 2.75 0.94 0.49 3.36 10.35 24.26 0.35 0.12 1.69 1.36 1.79 8.14 1.60 0.92 1.90 1.31 0.43 0.29 0.33 4.81 0.53 0.36 3.29 18.70 25.41 2.90 14.24 11.26 13.87 8.81 - 2013 2 0.27 5.50 4.93 8.05 4.26 0.37 1.76 4.47 7.78 6.70 0.00 6.48 25.13 0.65 0.43 0.74 0.00 0.09 0.24 0.00 0.02 0.00 0.12 1.08 0.10 0.10 0.33 0.02 -99.99 -99.99 -99.99 - 2013 3 0.02 0.12 0.00 0.10 0.00 4.58 2.95 2.93 3.02 0.70 0.51 0.07 0.65 5.66 3.35 6.85 7.77 1.43 2.78 0.24 1.44 7.58 0.70 0.06 0.03 0.23 0.47 0.10 0.04 0.08 0.00 - 2013 4 0.00 0.00 0.03 0.38 0.04 0.00 0.00 0.00 0.02 0.95 6.94 1.53 13.49 2.91 6.21 10.35 15.20 1.61 0.00 4.31 1.49 0.91 8.42 3.77 2.29 3.04 1.22 3.57 0.19 0.31 -99.99 - 2013 5 0.22 3.10 10.87 0.45 
1.14 0.36 1.69 5.24 6.70 8.11 2.85 9.35 10.44 0.88 0.95 0.25 9.48 21.95 0.00 1.28 0.02 0.48 0.49 0.02 0.26 15.41 9.43 1.15 1.05 0.52 0.16 - 2013 6 0.30 0.37 0.00 0.05 0.16 0.02 0.00 0.18 0.03 0.51 15.01 1.15 0.48 15.87 6.12 0.26 0.02 0.00 0.04 2.20 6.11 5.59 0.34 0.00 0.11 3.08 7.67 3.16 0.46 2.99 -99.99 - 2013 7 1.38 13.23 6.56 0.47 0.05 0.87 0.00 0.00 0.00 0.00 0.00 0.00 0.04 0.02 0.22 0.02 0.05 0.02 0.00 0.00 0.00 5.55 8.99 7.86 7.68 0.23 15.13 4.11 6.83 4.16 14.74 - 2013 8 3.86 0.93 3.03 1.02 0.20 1.56 0.15 10.98 0.38 2.26 2.99 0.93 0.00 13.94 10.51 2.40 6.61 0.97 0.14 5.75 1.70 0.08 1.75 0.39 0.08 0.23 0.13 0.29 0.24 1.04 0.67 - 2013 9 0.84 0.66 0.04 0.42 0.26 25.14 7.62 2.10 1.00 0.28 4.99 6.68 1.41 12.22 14.38 4.99 2.95 10.93 3.22 0.40 0.19 0.16 0.35 0.66 0.14 2.15 0.31 0.00 0.00 1.66 -99.99 - 2013 10 5.63 17.56 19.28 1.50 0.69 3.50 4.92 1.64 0.98 0.00 0.00 0.00 0.66 0.75 0.13 15.45 3.04 23.19 12.84 5.41 21.21 7.79 2.77 8.41 7.00 11.45 8.47 5.73 2.31 8.51 5.38 - 2013 11 1.33 27.77 1.11 1.50 5.58 3.04 3.95 2.81 1.93 11.03 2.13 0.80 6.71 0.41 0.11 0.36 9.08 3.82 13.15 0.27 0.05 0.35 0.14 0.05 0.28 1.25 0.16 3.29 0.98 0.51 -99.99 - 2013 12 0.22 0.32 2.83 15.68 2.43 3.74 3.12 6.94 0.00 0.58 11.08 6.57 2.48 14.28 8.51 1.67 4.59 17.14 3.77 17.59 11.11 4.11 18.86 7.48 1.58 13.39 9.61 3.45 37.27 16.56 5.17 - 2014 1 15.55 7.22 8.02 5.12 12.23 5.06 3.51 0.89 1.65 9.54 0.47 4.69 8.82 16.01 9.62 5.06 2.76 13.07 3.18 2.82 12.93 8.08 3.43 12.63 26.69 13.81 9.03 4.01 0.62 0.41 14.07 - 2014 2 11.20 0.65 9.04 5.27 4.00 1.56 9.70 9.76 2.92 7.49 3.35 21.29 3.72 23.74 2.85 5.05 6.96 5.16 13.36 4.99 3.64 8.33 15.35 6.38 5.72 16.19 2.70 1.14 -99.99 -99.99 -99.99 - 2014 3 10.71 7.68 1.35 0.37 9.20 26.19 4.77 1.13 5.30 0.18 0.06 0.25 0.25 0.33 0.22 0.51 3.03 2.68 4.69 14.84 8.22 3.45 0.36 1.42 3.48 0.75 2.04 5.25 0.66 0.04 8.62 - 2014 4 1.68 2.71 11.29 2.45 7.28 3.64 4.86 2.31 0.83 0.61 1.61 0.54 2.06 0.08 0.02 1.03 0.26 0.02 0.06 0.00 0.27 3.95 2.03 0.18 8.03 0.59 2.15 0.05 0.07 8.65 -99.99 - 2014 5 0.82 0.00 2.45 2.12 5.40 6.21 12.90 5.87 8.33 6.52 6.52 4.03 0.39 0.30 0.00 0.52 8.11 3.76 18.49 4.04 0.06 2.19 0.47 4.43 8.63 2.23 1.60 4.81 0.00 0.00 0.07 - 2014 6 6.72 2.00 5.05 7.69 0.62 1.57 17.35 1.48 5.56 2.58 0.02 0.81 0.75 0.88 0.15 0.11 0.05 0.03 0.00 0.00 0.03 0.11 0.47 2.51 7.34 0.39 0.00 0.31 0.02 0.03 -99.99 - 2014 7 0.07 1.62 4.74 8.89 2.46 1.23 2.98 1.19 0.00 0.15 0.46 13.96 0.17 8.33 4.11 4.60 0.01 0.26 12.89 0.08 0.02 0.00 0.03 0.00 0.00 6.43 6.16 0.33 1.24 1.31 2.80 - 2014 8 3.38 22.81 4.90 0.00 8.74 2.56 1.40 7.18 8.29 17.68 5.17 8.20 0.97 2.57 0.59 3.15 1.73 0.93 0.36 4.41 1.91 0.65 1.39 0.03 0.00 0.03 5.84 8.16 6.49 0.17 4.10 - 2014 9 0.00 0.00 0.00 0.52 2.26 0.10 0.03 0.00 0.00 0.02 0.03 0.05 0.00 0.12 2.36 0.07 0.06 0.19 0.29 0.12 0.00 2.06 4.51 0.87 1.82 0.10 0.84 1.12 0.03 3.48 -99.99 - 2014 10 0.27 2.91 36.67 1.22 18.65 3.23 3.32 8.60 6.75 3.90 1.34 0.33 0.49 0.03 4.87 15.18 8.01 6.34 4.45 10.34 3.03 2.87 3.73 3.47 2.91 11.23 19.85 11.08 4.65 1.32 7.64 - 2014 11 16.75 7.19 5.09 1.02 9.24 36.41 1.81 12.25 1.12 5.06 11.03 4.63 8.67 5.41 0.06 2.86 0.51 0.50 0.64 0.11 18.14 2.94 0.97 1.70 1.56 0.25 0.27 0.08 1.03 0.16 -99.99 - 2014 12 3.74 0.48 0.25 2.52 2.29 17.36 11.18 1.14 17.82 10.96 4.85 2.34 5.07 3.55 2.60 12.31 6.19 8.73 7.94 2.19 33.45 21.93 6.12 6.01 2.35 5.25 0.46 0.18 0.30 2.07 9.83 - 2015 1 19.05 7.62 0.00 0.24 9.30 8.97 16.53 9.31 19.07 11.05 16.66 5.24 3.88 24.27 8.67 9.76 3.24 0.26 0.31 6.09 0.22 1.17 9.99 1.68 7.41 1.85 8.68 7.39 7.16 1.63 0.28 - 2015 2 0.02 0.37 0.28 
0.02 0.13 0.10 0.13 0.41 0.27 0.00 0.25 1.27 3.31 0.00 16.95 1.81 2.32 6.94 3.99 2.91 2.84 22.10 9.42 6.50 13.51 3.12 3.49 20.18 -99.99 -99.99 -99.99 - 2015 3 5.94 4.58 6.49 0.12 1.41 6.67 8.42 1.26 5.95 0.89 9.85 23.19 0.19 0.00 0.12 0.24 0.09 0.04 0.52 0.24 0.00 0.68 1.37 1.21 14.97 0.60 16.42 9.83 5.97 15.86 6.29 - 2015 4 2.66 7.28 1.12 0.22 0.22 0.04 0.05 0.02 0.06 6.85 6.94 2.47 1.17 6.00 0.16 0.00 0.06 0.02 0.21 0.02 0.02 0.04 0.00 5.09 0.04 0.10 6.44 11.16 4.31 0.58 -99.99 - 2015 5 0.00 19.90 7.34 4.35 18.24 1.94 0.24 6.99 3.86 11.89 1.11 0.72 0.00 0.00 5.48 3.89 11.87 3.24 0.94 0.21 1.08 0.11 6.27 0.50 0.16 0.13 13.73 5.06 0.75 12.40 2.69 - 2015 6 19.53 1.57 0.53 3.29 3.38 2.85 0.05 0.00 0.00 0.00 0.00 0.02 0.42 0.02 0.76 4.46 0.37 0.85 1.96 2.29 2.63 0.12 0.07 1.75 7.52 4.03 16.25 1.43 1.25 0.02 -99.99 - 2015 7 1.39 0.78 15.84 2.48 3.97 14.86 4.21 0.53 4.63 2.40 7.62 4.38 1.91 0.07 0.03 14.79 5.37 10.56 1.51 3.86 0.78 1.11 0.91 0.17 0.65 7.42 12.51 5.01 0.97 0.43 8.87 - 2015 8 2.51 4.55 5.71 2.41 16.28 0.05 0.08 2.16 1.01 4.75 0.05 0.00 2.77 2.12 0.12 0.34 0.00 0.36 9.90 3.08 0.50 14.29 6.13 0.06 15.50 7.68 2.36 1.65 1.05 0.24 3.94 - 2015 9 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 - 2015 10 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 - 2015 11 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 - 2015 12 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 -99.99 diff --git a/data/ccpp_data.npz b/data/ccpp_data.npz deleted file mode 100644 index a507ba2..0000000 Binary files a/data/ccpp_data.npz and /dev/null differ diff --git a/data_providers.py b/data_providers.py new file mode 100644 index 0000000..03f7d94 --- /dev/null +++ b/data_providers.py @@ -0,0 +1,740 @@ +# -*- coding: utf-8 -*- +"""Data providers. + +This module provides classes for loading datasets and iterating over batches of +data points. +""" + +import os + +import numpy as np +DEFAULT_SEED = 22012018 + + +class DataProvider(object): + """Generic data provider.""" + + def __init__(self, inputs, targets, batch_size, max_num_batches=-1, + random_sampling=True, rng=None): + """Create a new data provider object. + + Args: + inputs (ndarray): Array of data input features of shape + (num_data, input_dim). + targets (ndarray): Array of data output targets of shape + (num_data, output_dim) or (num_data,) if output_dim == 1. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. 
+ """ + self.inputs = inputs + self.targets = targets + + if batch_size < 1: + raise ValueError('batch_size must be >= 1') + + self._batch_size = batch_size + + if max_num_batches == 0 or max_num_batches < -1: + raise ValueError('max_num_batches must be -1 or > 0') + + self._max_num_batches = max_num_batches + self._update_num_batches() + self.random_sampling = random_sampling + self._current_order = np.arange(inputs.shape[0]) + + if rng is None: + rng = np.random.RandomState(DEFAULT_SEED) + + self.rng = rng + self.new_epoch() + + @property + def batch_size(self): + """Number of data points to include in each batch.""" + return self._batch_size + + @batch_size.setter + def batch_size(self, value): + if value < 1: + raise ValueError('batch_size must be >= 1') + self._batch_size = value + self._update_num_batches() + + @property + def max_num_batches(self): + """Maximum number of batches to iterate over in an epoch.""" + return self._max_num_batches + + @max_num_batches.setter + def max_num_batches(self, value): + if value == 0 or value < -1: + raise ValueError('max_num_batches must be -1 or > 0') + self._max_num_batches = value + self._update_num_batches() + + def _update_num_batches(self): + """Updates number of batches to iterate over.""" + # maximum possible number of batches is equal to number of whole times + # batch_size divides in to the number of data points which can be + # found using integer division + possible_num_batches = self.inputs.shape[0] // self.batch_size + if self.max_num_batches == -1: + self.num_batches = possible_num_batches + else: + self.num_batches = min(self.max_num_batches, possible_num_batches) + + def __iter__(self): + """Implements Python iterator interface. + + This should return an object implementing a `next` method which steps + through a sequence returning one element at a time and raising + `StopIteration` when at the end of the sequence. Here the object + returned is the DataProvider itself. + """ + return self + + def new_epoch(self): + """Starts a new epoch (pass through data), possibly shuffling first.""" + self._curr_batch = 0 + + def __next__(self): + return self.next() + + def reset(self): + """Resets the provider to the initial state.""" + inv_perm = np.argsort(self._current_order) + self._current_order = self._current_order[inv_perm] + self.inputs = self.inputs[inv_perm] + self.targets = self.targets[inv_perm] + self.new_epoch() + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + if self._curr_batch + 1 > self.num_batches: + # no more batches in current iteration through data set so start + # new epoch ready for another pass and indicate iteration is at end + self.new_epoch() + raise StopIteration() + # create an index slice corresponding to current batch number + if self.random_sampling: + batch_slice = self.rng.choice(self.inputs.shape[0], size=self.batch_size, replace=False) + + else: + batch_slice = slice(self._curr_batch * self.batch_size, + (self._curr_batch + 1) * self.batch_size) + + inputs_batch = self.inputs[batch_slice] + targets_batch = self.targets[batch_slice] + self._curr_batch += 1 + return inputs_batch, targets_batch + +class MNISTDataProvider(DataProvider): + """Data provider for MNIST handwritten digit images.""" + + def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, + random_sampling=False, rng=None): + """Create a new MNIST data provider object. + + Args: + which_set: One of 'train', 'valid' or 'eval'. 
Determines which + portion of the MNIST data this object should provide. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + """ + # check a valid which_set was provided + assert which_set in ['train', 'valid', 'test'], ( + 'Expected which_set to be either train, valid or eval. ' + 'Got {0}'.format(which_set) + ) + self.which_set = which_set + self.num_classes = 10 + # construct path to data using os.path.join to ensure the correct path + # separator for the current platform / OS is used + # MLP_DATA_DIR environment variable should point to the data directory + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'mnist-{0}.npz'.format(which_set)) + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + # load data from compressed numpy file + loaded = np.load(data_path) + inputs, targets = loaded['inputs'], loaded['targets'] + inputs = inputs.astype(np.float32) + # pass the loaded data to the parent class __init__ + super(MNISTDataProvider, self).__init__( + inputs, targets, batch_size, max_num_batches, random_sampling, rng) + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + inputs_batch, targets_batch = super(MNISTDataProvider, self).next() + return inputs_batch, self.to_one_of_k(targets_batch) + + def to_one_of_k(self, int_targets): + """Converts integer coded class target to 1 of K coded targets. + + Args: + int_targets (ndarray): Array of integer coded class targets (i.e. + where an integer from 0 to `num_classes` - 1 is used to + indicate which is the correct class). This should be of shape + (num_data,). + + Returns: + Array of 1 of K coded targets i.e. an array of shape + (num_data, num_classes) where for each row all elements are equal + to zero except for the column corresponding to the correct class + which is equal to one. + """ + one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) + one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 + return one_of_k_targets + +class EMNISTDataProvider(DataProvider): + """Data provider for EMNIST handwritten digit images.""" + + def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, + random_sampling=False, rng=None, flatten=False, one_hot=False): + """Create a new EMNIST data provider object. + + Args: + which_set: One of 'train', 'valid' or 'eval'. Determines which + portion of the EMNIST data this object should provide. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + """ + # check a valid which_set was provided + assert which_set in ['train', 'valid', 'test'], ( + 'Expected which_set to be either train, valid or eval. 
' + 'Got {0}'.format(which_set) + ) + self.one_hot = one_hot + self.which_set = which_set + self.num_classes = 47 + # construct path to data using os.path.join to ensure the correct path + # separator for the current platform / OS is used + # MLP_DATA_DIR environment variable should point to the data directory + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'emnist-{0}.npz'.format(which_set)) + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + # load data from compressed numpy file + loaded = np.load(data_path) + + inputs, targets = loaded['inputs'], loaded['targets'] + inputs = inputs.astype(np.float32) + if flatten: + inputs = np.reshape(inputs, newshape=(-1, 28*28)) + else: + inputs = np.expand_dims(inputs, axis=3) + inputs = inputs / 255.0 + + # pass the loaded data to the parent class __init__ + super(EMNISTDataProvider, self).__init__( + inputs, targets, batch_size, max_num_batches, random_sampling, rng) + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + inputs_batch, targets_batch = super(EMNISTDataProvider, self).next() + if self.one_hot: + return inputs_batch, self.to_one_of_k(targets_batch) + else: + return inputs_batch, targets_batch + + def to_one_of_k(self, int_targets): + """Converts integer coded class target to 1 of K coded targets. + + Args: + int_targets (ndarray): Array of integer coded class targets (i.e. + where an integer from 0 to `num_classes` - 1 is used to + indicate which is the correct class). This should be of shape + (num_data,). + + Returns: + Array of 1 of K coded targets i.e. an array of shape + (num_data, num_classes) where for each row all elements are equal + to zero except for the column corresponding to the correct class + which is equal to one. + """ + one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) + one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 + return one_of_k_targets + +class CIFAR10DataProvider(DataProvider): + """Data provider for CIFAR-10 object images.""" + + def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, + random_sampling=False, rng=None, flatten=False, one_hot=False): + """Create a new EMNIST data provider object. + + Args: + which_set: One of 'train', 'valid' or 'eval'. Determines which + portion of the EMNIST data this object should provide. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + """ + # check a valid which_set was provided + assert which_set in ['train', 'valid', 'test'], ( + 'Expected which_set to be either train, valid or eval. 
' + 'Got {0}'.format(which_set) + ) + self.one_hot = one_hot + self.which_set = which_set + self.num_classes = 10 + # construct path to data using os.path.join to ensure the correct path + # separator for the current platform / OS is used + # MLP_DATA_DIR environment variable should point to the data directory + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'cifar10-{0}.npz'.format(which_set)) + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + # load data from compressed numpy file + loaded = np.load(data_path) + + inputs, targets = loaded['inputs'], loaded['targets'] + inputs = inputs.astype(np.float32) + if flatten: + inputs = np.reshape(inputs, newshape=(-1, 32*32*3)) + else: + inputs = np.reshape(inputs, newshape=(-1, 3, 32, 32)) + inputs = np.transpose(inputs, axes=(0, 2, 3, 1)) + + inputs = inputs / 255.0 + # label map gives strings corresponding to integer label targets + + + # pass the loaded data to the parent class __init__ + super(CIFAR10DataProvider, self).__init__( + inputs, targets, batch_size, max_num_batches, random_sampling, rng) + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + inputs_batch, targets_batch = super(CIFAR10DataProvider, self).next() + if self.one_hot: + return inputs_batch, self.to_one_of_k(targets_batch) + else: + return inputs_batch, targets_batch + + def to_one_of_k(self, int_targets): + """Converts integer coded class target to 1 of K coded targets. + + Args: + int_targets (ndarray): Array of integer coded class targets (i.e. + where an integer from 0 to `num_classes` - 1 is used to + indicate which is the correct class). This should be of shape + (num_data,). + + Returns: + Array of 1 of K coded targets i.e. an array of shape + (num_data, num_classes) where for each row all elements are equal + to zero except for the column corresponding to the correct class + which is equal to one. + """ + one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) + one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 + return one_of_k_targets + + + +class CIFAR100DataProvider(DataProvider): + """Data provider for CIFAR-100 object images.""" + + def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, + random_sampling=False, rng=None, flatten=False, one_hot=False): + """Create a new EMNIST data provider object. + + Args: + which_set: One of 'train', 'valid' or 'eval'. Determines which + portion of the EMNIST data this object should provide. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + """ + # check a valid which_set was provided + assert which_set in ['train', 'valid', 'test'], ( + 'Expected which_set to be either train, valid or eval. 
' + 'Got {0}'.format(which_set) + ) + self.one_hot = one_hot + self.which_set = which_set + self.num_classes = 100 + # construct path to data using os.path.join to ensure the correct path + # separator for the current platform / OS is used + # MLP_DATA_DIR environment variable should point to the data directory + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'cifar100-{0}.npz'.format(which_set)) + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + # load data from compressed numpy file + loaded = np.load(data_path) + + inputs, targets = loaded['inputs'], loaded['targets'] + inputs = inputs.astype(np.float32) + if flatten: + inputs = np.reshape(inputs, newshape=(-1, 32*32*3)) + else: + inputs = np.reshape(inputs, newshape=(-1, 3, 32, 32)) + inputs = np.transpose(inputs, axes=(0, 2, 3, 1)) + inputs = inputs / 255.0 + + # pass the loaded data to the parent class __init__ + super(CIFAR100DataProvider, self).__init__( + inputs, targets, batch_size, max_num_batches, random_sampling, rng) + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + inputs_batch, targets_batch = super(CIFAR100DataProvider, self).next() + if self.one_hot: + return inputs_batch, self.to_one_of_k(targets_batch) + else: + return inputs_batch, targets_batch + + def to_one_of_k(self, int_targets): + """Converts integer coded class target to 1 of K coded targets. + + Args: + int_targets (ndarray): Array of integer coded class targets (i.e. + where an integer from 0 to `num_classes` - 1 is used to + indicate which is the correct class). This should be of shape + (num_data,). + + Returns: + Array of 1 of K coded targets i.e. an array of shape + (num_data, num_classes) where for each row all elements are equal + to zero except for the column corresponding to the correct class + which is equal to one. + """ + one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) + one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 + return one_of_k_targets + + +class MSD10GenreDataProvider(DataProvider): + """Data provider for Million Song Dataset 10-genre classification task.""" + + def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, + random_sampling=False, rng=None, one_hot=False, flatten=True): + """Create a new EMNIST data provider object. + + Args: + which_set: One of 'train', 'valid' or 'eval'. Determines which + portion of the EMNIST data this object should provide. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + """ + # check a valid which_set was provided + assert which_set in ['train', 'valid', 'test'], ( + 'Expected which_set to be either train, valid or eval. 
' + 'Got {0}'.format(which_set) + ) + self.one_hot = one_hot + self.which_set = which_set + self.num_classes = 10 + # construct path to data using os.path.join to ensure the correct path + # separator for the current platform / OS is used + # MLP_DATA_DIR environment variable should point to the data directory + if which_set is not "test": + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'msd10-{0}.npz'.format(which_set)) + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + # load data from compressed numpy file + loaded = np.load(data_path) + + inputs, target = loaded['inputs'], loaded['targets'] + else: + input_data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'msd-10-genre-test-inputs.npz') + assert os.path.isfile(input_data_path), ( + 'Data file does not exist at expected path: ' + input_data_path + ) + target_data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'msd-10-genre-test-targets.npz') + assert os.path.isfile(input_data_path), ( + 'Data file does not exist at expected path: ' + input_data_path + ) + # load data from compressed numpy file + inputs = np.load(input_data_path)['inputs'] + target = np.load(target_data_path)['targets'] + if flatten: + inputs = inputs.reshape((-1, 120*25)) + #inputs, targets = loaded['inputs'], loaded['targets'] + + + # label map gives strings corresponding to integer label targets + + # pass the loaded data to the parent class __init__ + super(MSD10GenreDataProvider, self).__init__( + inputs, target, batch_size, max_num_batches, random_sampling, rng) + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + inputs_batch, targets_batch = super(MSD10GenreDataProvider, self).next() + if self.one_hot: + return inputs_batch, self.to_one_of_k(targets_batch) + else: + return inputs_batch, targets_batch + + def to_one_of_k(self, int_targets): + """Converts integer coded class target to 1 of K coded targets. + + Args: + int_targets (ndarray): Array of integer coded class targets (i.e. + where an integer from 0 to `num_classes` - 1 is used to + indicate which is the correct class). This should be of shape + (num_data,). + + Returns: + Array of 1 of K coded targets i.e. an array of shape + (num_data, num_classes) where for each row all elements are equal + to zero except for the column corresponding to the correct class + which is equal to one. + """ + one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) + one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 + return one_of_k_targets + +class MSD25GenreDataProvider(DataProvider): + """Data provider for Million Song Dataset 25-genre classification task.""" + + def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, + random_sampling=False, rng=None, one_hot=False, flatten=True): + """Create a new EMNIST data provider object. + + Args: + which_set: One of 'train', 'valid' or 'eval'. Determines which + portion of the EMNIST data this object should provide. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. 
+ """ + # check a valid which_set was provided + assert which_set in ['train', 'valid', 'test'], ( + 'Expected which_set to be either train or valid. ' + 'Got {0}'.format(which_set) + ) + self.one_hot = one_hot + self.which_set = which_set + self.num_classes = 25 + # construct path to data using os.path.join to ensure the correct path + # separator for the current platform / OS is used + # MLP_DATA_DIR environment variable should point to the data directory + + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'msd10-{0}.npz'.format(which_set)) + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + # load data from compressed numpy file + loaded = np.load(data_path) + + inputs, target = loaded['inputs'], loaded['targets'] + + if flatten: + inputs = inputs.reshape((-1, 120*25)) + #inputs, target + # pass the loaded data to the parent class __init__ + super(MSD25GenreDataProvider, self).__init__( + inputs, target, batch_size, max_num_batches, random_sampling, rng) + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + inputs_batch, targets_batch = super(MSD25GenreDataProvider, self).next() + if self.one_hot: + return inputs_batch, self.to_one_of_k(targets_batch) + else: + return inputs_batch, targets_batch + + def to_one_of_k(self, int_targets): + """Converts integer coded class target to 1 of K coded targets. + + Args: + int_targets (ndarray): Array of integer coded class targets (i.e. + where an integer from 0 to `num_classes` - 1 is used to + indicate which is the correct class). This should be of shape + (num_data,). + + Returns: + Array of 1 of K coded targets i.e. an array of shape + (num_data, num_classes) where for each row all elements are equal + to zero except for the column corresponding to the correct class + which is equal to one. + """ + one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) + one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 + return one_of_k_targets + + + +class MetOfficeDataProvider(DataProvider): + """South Scotland Met Office weather data provider.""" + + def __init__(self, window_size, batch_size=10, max_num_batches=-1, + random_sampling=False, rng=None): + """Create a new Met Office data provider object. + + Args: + window_size (int): Size of windows to split weather time series + data into. The constructed input features will be the first + `window_size - 1` entries in each window and the target outputs + the last entry in each window. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + """ + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'HadSSP_daily_qc.txt') + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + raw = np.loadtxt(data_path, skiprows=3, usecols=range(2, 32)) + assert window_size > 1, 'window_size must be at least 2.' 
+ self.window_size = window_size + # filter out all missing datapoints and flatten to a vector + filtered = raw[raw >= 0].flatten() + # normalise data to zero mean, unit standard deviation + mean = np.mean(filtered) + std = np.std(filtered) + normalised = (filtered - mean) / std + # create a view on to array corresponding to a rolling window + shape = (normalised.shape[-1] - self.window_size + 1, self.window_size) + strides = normalised.strides + (normalised.strides[-1],) + windowed = np.lib.stride_tricks.as_strided( + normalised, shape=shape, strides=strides) + # inputs are first (window_size - 1) entries in windows + inputs = windowed[:, :-1] + # targets are last entry in windows + targets = windowed[:, -1] + super(MetOfficeDataProvider, self).__init__( + inputs, targets, batch_size, max_num_batches, random_sampling, rng) + +class CCPPDataProvider(DataProvider): + + def __init__(self, which_set='train', input_dims=None, batch_size=10, + max_num_batches=-1, random_sampling=False, rng=None): + """Create a new Combined Cycle Power Plant data provider object. + + Args: + which_set: One of 'train' or 'valid'. Determines which portion of + data this object should provide. + input_dims: Which of the four input dimension to use. If `None` all + are used. If an iterable of integers are provided (consisting + of a subset of {0, 1, 2, 3}) then only the corresponding + input dimensions are included. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. + random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + """ + data_path = os.path.join( + os.environ['MLP_DATA_DIR'], 'ccpp_data.npz') + assert os.path.isfile(data_path), ( + 'Data file does not exist at expected path: ' + data_path + ) + # check a valid which_set was provided + assert which_set in ['train', 'valid'], ( + 'Expected which_set to be either train or valid ' + 'Got {0}'.format(which_set) + ) + # check input_dims are valid + if not input_dims is not None: + input_dims = set(input_dims) + assert input_dims.issubset({0, 1, 2, 3}), ( + 'input_dims should be a subset of {0, 1, 2, 3}' + ) + loaded = np.load(data_path) + inputs = loaded[which_set + '_inputs'] + if input_dims is not None: + inputs = inputs[:, input_dims] + targets = loaded[which_set + '_targets'] + super(CCPPDataProvider, self).__init__( + inputs, targets, batch_size, max_num_batches, random_sampling, rng) + + +class AugmentedMNISTDataProvider(MNISTDataProvider): + """Data provider for MNIST dataset which randomly transforms images.""" + + def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, + random_sampling=False, rng=None, transformer=None): + """Create a new augmented MNIST data provider object. + + Args: + which_set: One of 'train', 'valid' or 'test'. Determines which + portion of the MNIST data this object should provide. + batch_size (int): Number of data points to include in each batch. + max_num_batches (int): Maximum number of batches to iterate over + in an epoch. If `max_num_batches * batch_size > num_data` then + only as many batches as the data can be split into will be + used. If set to -1 all of the data will be used. 
+ random_sampling (bool): Whether to randomly permute the order of + the data before each epoch. + rng (RandomState): A seeded random number generator. + transformer: Function which takes an `inputs` array of shape + (batch_size, input_dim) corresponding to a batch of input + images and a `rng` random number generator object (i.e. a + call signature `transformer(inputs, rng)`) and applies a + potentiall random set of transformations to some / all of the + input images as each new batch is returned when iterating over + the data provider. + """ + super(AugmentedMNISTDataProvider, self).__init__( + which_set, batch_size, max_num_batches, random_sampling, rng) + self.transformer = transformer + + def next(self): + """Returns next data batch or raises `StopIteration` if at end.""" + inputs_batch, targets_batch = super( + AugmentedMNISTDataProvider, self).next() + transformed_inputs_batch = self.transformer(inputs_batch, self.rng) + return transformed_inputs_batch, targets_batch diff --git a/emnist_network_trainer.py b/emnist_network_trainer.py new file mode 100644 index 0000000..b848c29 --- /dev/null +++ b/emnist_network_trainer.py @@ -0,0 +1,181 @@ +import argparse +import numpy as np +import tensorflow as tf +import tqdm +from data_providers import EMNISTDataProvider +from network_builder import ClassifierNetworkGraph +from utils.parser_utils import ParserClass +from utils.storage import build_experiment_folder, save_statistics, get_best_validation_model_statistics + +tf.reset_default_graph() # resets any previous graphs to clear memory +parser = argparse.ArgumentParser(description='Welcome to CNN experiments script') # generates an argument parser +parser_extractor = ParserClass(parser=parser) # creates a parser class to process the parsed input + +batch_size, seed, epochs, logs_path, continue_from_epoch, tensorboard_enable, batch_norm, \ +strided_dim_reduction, experiment_prefix, dropout_rate_value = parser_extractor.get_argument_variables() +# returns a list of objects that contain +# our parsed input + +experiment_name = "experiment_{}_batch_size_{}_bn_{}_mp_{}".format(experiment_prefix, + batch_size, batch_norm, + strided_dim_reduction) +# generate experiment name + +rng = np.random.RandomState(seed=seed) # set seed + +train_data = EMNISTDataProvider(which_set="train", batch_size=batch_size, rng=rng, random_sampling=True) +val_data = EMNISTDataProvider(which_set="valid", batch_size=batch_size, rng=rng) +test_data = EMNISTDataProvider(which_set="test", batch_size=batch_size, rng=rng) +# setup our data providers + +print("Running {}".format(experiment_name)) +print("Starting from epoch {}".format(continue_from_epoch)) + +saved_models_filepath, logs_filepath = build_experiment_folder(experiment_name, logs_path) # generate experiment dir + +# Placeholder setup +data_inputs = tf.placeholder(tf.float32, [batch_size, train_data.inputs.shape[1], train_data.inputs.shape[2], + train_data.inputs.shape[3]], 'data-inputs') +data_targets = tf.placeholder(tf.int32, [batch_size], 'data-targets') + +training_phase = tf.placeholder(tf.bool, name='training-flag') +rotate_data = tf.placeholder(tf.bool, name='rotate-flag') +dropout_rate = tf.placeholder(tf.float32, name='dropout-prob') + +classifier_network = ClassifierNetworkGraph(input_x=data_inputs, target_placeholder=data_targets, + dropout_rate=dropout_rate, batch_size=batch_size, + n_classes=train_data.num_classes, + is_training=training_phase, augment_rotate_flag=rotate_data, + strided_dim_reduction=strided_dim_reduction, + 
use_batch_normalization=batch_norm) # initialize our computational graph + +if continue_from_epoch == -1: # if this is a new experiment and not continuation of a previous one then generate a new + # statistics file + save_statistics(logs_filepath, "result_summary_statistics", ["epoch", "train_c_loss", "train_c_accuracy", + "val_c_loss", "val_c_accuracy", + "test_c_loss", "test_c_accuracy"], create=True) + +start_epoch = continue_from_epoch if continue_from_epoch != -1 else 0 # if new experiment start from 0 otherwise +# continue where left off + +summary_op, losses_ops, c_error_opt_op = classifier_network.init_train() # get graph operations (ops) + +total_train_batches = train_data.num_batches +total_val_batches = val_data.num_batches +total_test_batches = test_data.num_batches + +if tensorboard_enable: + print("saved tensorboard file at", logs_filepath) + writer = tf.summary.FileWriter(logs_filepath, graph=tf.get_default_graph()) + +init = tf.global_variables_initializer() # initialization op for the graph + +with tf.Session() as sess: + sess.run(init) # actually running the initialization op + train_saver = tf.train.Saver() # saver object that will save our graph so we can reload it later for continuation of + val_saver = tf.train.Saver() + best_val_accuracy = 0. + best_epoch = 0 + # training or inference + + if continue_from_epoch != -1: + train_saver.restore(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, + continue_from_epoch)) # restore previous graph to continue operations + best_val_accuracy, best_epoch = get_best_validation_model_statistics(logs_filepath, "result_summary_statistics") + print(best_val_accuracy, best_epoch) + + with tqdm.tqdm(total=epochs - start_epoch) as epoch_pbar: + for e in range(start_epoch, epochs): + total_c_loss = 0. + total_accuracy = 0. + with tqdm.tqdm(total=total_train_batches) as pbar_train: + for batch_idx, (x_batch, y_batch) in enumerate(train_data): + iter_id = e * total_train_batches + batch_idx + _, c_loss_value, acc = sess.run( + [c_error_opt_op, losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + # Here we execute the c_error_opt_op which trains the network and also the ops that compute the + # loss and accuracy, we save those in _, c_loss_value and acc respectively. + total_c_loss += c_loss_value # add loss of current iter to sum + total_accuracy += acc # add acc of current iter to sum + + iter_out = "iter_num: {}, train_loss: {}, train_accuracy: {}".format(iter_id, + total_c_loss / (batch_idx + 1), + total_accuracy / ( + batch_idx + 1)) # show + # iter statistics using running averages of previous iter within this epoch + pbar_train.set_description(iter_out) + pbar_train.update(1) + if tensorboard_enable and batch_idx % 25 == 0: # save tensorboard summary every 25 iterations + _summary = sess.run( + summary_op, + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + writer.add_summary(_summary, global_step=iter_id) + + total_c_loss /= total_train_batches # compute mean of los + total_accuracy /= total_train_batches # compute mean of accuracy + + save_path = train_saver.save(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + # save graph and weights + print("Saved current model at", save_path) + + total_val_c_loss = 0. + total_val_accuracy = 0. 
# run validation stage, note how training_phase placeholder is set to False + # and that we do not run the c_error_opt_op which runs gradient descent, but instead only call the loss ops + # to collect losses on the validation set + with tqdm.tqdm(total=total_val_batches) as pbar_val: + for batch_idx, (x_batch, y_batch) in enumerate(val_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_val_c_loss += c_loss_value + total_val_accuracy += acc + iter_out = "val_loss: {}, val_accuracy: {}".format(total_val_c_loss / (batch_idx + 1), + total_val_accuracy / (batch_idx + 1)) + pbar_val.set_description(iter_out) + pbar_val.update(1) + + total_val_c_loss /= total_val_batches + total_val_accuracy /= total_val_batches + + if best_val_accuracy < total_val_accuracy: # check if val acc better than the previous best and if + # so save current as best and save the model as the best validation model to be used on the test set + # after the final epoch + best_val_accuracy = total_val_accuracy + best_epoch = e + save_path = val_saver.save(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + print("Saved best validation score model at", save_path) + + epoch_pbar.update(1) + # save statistics of this epoch, train and val without test set performance + save_statistics(logs_filepath, "result_summary_statistics", + [e, total_c_loss, total_accuracy, total_val_c_loss, total_val_accuracy, + -1, -1]) + + val_saver.restore(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, best_epoch)) + # restore model with best performance on validation set + total_test_c_loss = 0. + total_test_accuracy = 0. 
+ # computer test loss and accuracy and save + with tqdm.tqdm(total=total_test_batches) as pbar_test: + for batch_idx, (x_batch, y_batch) in enumerate(test_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_test_c_loss += c_loss_value + total_test_accuracy += acc + iter_out = "test_loss: {}, test_accuracy: {}".format(total_test_c_loss / (batch_idx + 1), + total_test_accuracy / (batch_idx + 1)) + pbar_test.set_description(iter_out) + pbar_test.update(1) + + total_test_c_loss /= total_test_batches + total_test_accuracy /= total_test_batches + + save_statistics(logs_filepath, "result_summary_statistics", + ["test set performance", -1, -1, -1, -1, + total_test_c_loss, total_test_accuracy]) diff --git a/gpu_cluster_environment_script.sh b/gpu_cluster_environment_script.sh new file mode 100644 index 0000000..d20d13d --- /dev/null +++ b/gpu_cluster_environment_script.sh @@ -0,0 +1,21 @@ +#!/bin/sh +#To be used before srun so that interactive sessions are run with gpu support +export CUDA_HOME=/opt/cuda-8.0.44 + +export CUDNN_HOME=/opt/cuDNN-6.0_8.0 + +export STUDENT_ID=$(whoami) + +export LD_LIBRARY_PATH=${CUDNN_HOME}/lib64:${CUDA_HOME}/lib64:$LD_LIBRARY_PATH + +export LIBRARY_PATH=${CUDNN_HOME}/lib64:$LIBRARY_PATH + +export CPATH=${CUDNN_HOME}/include:$CPATH + +export PATH=${CUDA_HOME}/bin:${PATH} + +export PYTHON_PATH=$PATH + +# Activate the relevant virtual environment: + +source /home/${STUDENT_ID}/miniconda3/bin/activate mlp diff --git a/gpu_cluster_tutorial_training_script.sh b/gpu_cluster_tutorial_training_script.sh new file mode 100644 index 0000000..8f6c963 --- /dev/null +++ b/gpu_cluster_tutorial_training_script.sh @@ -0,0 +1,33 @@ +#!/bin/sh +#SBATCH -N 1 # nodes requested +#SBATCH -n 1 # tasks requested +#SBATCH --gres=gpu:1 +#SBATCH --mem=16000 # memory in Mb +#SBATCH -o sample_experiment_outfile # send stdout to sample_experiment_outfile +#SBATCH -e sample_experiment_errfile # send stderr to sample_experiment_errfile +#SBATCH -t 2:00:00 # time requested in hour:minute:secon +export CUDA_HOME=/opt/cuda-8.0.44 + +export CUDNN_HOME=/opt/cuDNN-6.0_8.0 + +export STUDENT_ID=$(whoami) + +export LD_LIBRARY_PATH=${CUDNN_HOME}/lib64:${CUDA_HOME}/lib64:$LD_LIBRARY_PATH + +export LIBRARY_PATH=${CUDNN_HOME}/lib64:$LIBRARY_PATH + +export CPATH=${CUDNN_HOME}/include:$CPATH + +export PATH=${CUDA_HOME}/bin:${PATH} + +export PYTHON_PATH=$PATH + +mkdir -p /disk/scratch/${STUDENT_ID} + +export TMPDIR=/disk/scratch/${STUDENT_ID}/ +export TMP=/disk/scratch/${STUDENT_ID}/ +# Activate the relevant virtual environment: + +source /home/${STUDENT_ID}/miniconda3/bin/activate mlp + +python emnist_network_trainer.py --batch_size 128 --epochs 200 --experiment_prefix vgg-net-emnist-sample-exp --dropout_rate 0.4 --batch_norm_use True --strided_dim_reduction True --seed 25012018 diff --git a/mlp/__init__.py b/mlp/__init__.py deleted file mode 100644 index b41e667..0000000 --- a/mlp/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# -*- coding: utf-8 -*- -"""Machine Learning Practical package.""" - -__authors__ = ['Pawel Swietojanski', 'Steve Renals', 'Matt Graham'] - -DEFAULT_SEED = 123456 # Default random number generator seed if none provided. 
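These deletions drop the old NumPy-based `mlp` package, which the new top-level `data_providers.py` added above supersedes. The main interface change is that the old `shuffle_order` flag (which reshuffled the whole dataset once per epoch) is replaced by `random_sampling`, which draws each batch without replacement via `rng.choice`. A minimal usage sketch of the new providers, assuming `MLP_DATA_DIR` points at a directory containing `emnist-train.npz`:

```
# Minimal sketch of the new provider interface (assumes MLP_DATA_DIR is set
# and that directory contains emnist-train.npz, as data_providers.py expects).
import numpy as np
from data_providers import EMNISTDataProvider

rng = np.random.RandomState(25012018)
train_data = EMNISTDataProvider(which_set="train", batch_size=128,
                                random_sampling=True, rng=rng)

for x_batch, y_batch in train_data:
    # x_batch: (128, 28, 28, 1) float32 images scaled to [0, 1]
    # y_batch: (128,) integer class labels (1-of-K encoded only if one_hot=True)
    pass
```

Passing `one_hot=True` returns 1-of-K targets from `to_one_of_k` instead of integer labels, and `flatten=True` reshapes the images to `(batch_size, 784)` vectors.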
diff --git a/mlp/data_providers.py b/mlp/data_providers.py deleted file mode 100644 index aac04ee..0000000 --- a/mlp/data_providers.py +++ /dev/null @@ -1,401 +0,0 @@ -# -*- coding: utf-8 -*- -"""Data providers. - -This module provides classes for loading datasets and iterating over batches of -data points. -""" - -import pickle -import gzip -import numpy as np -import os -from mlp import DEFAULT_SEED - - -class DataProvider(object): - """Generic data provider.""" - - def __init__(self, inputs, targets, batch_size, max_num_batches=-1, - shuffle_order=True, rng=None): - """Create a new data provider object. - - Args: - inputs (ndarray): Array of data input features of shape - (num_data, input_dim). - targets (ndarray): Array of data output targets of shape - (num_data, output_dim) or (num_data,) if output_dim == 1. - batch_size (int): Number of data points to include in each batch. - max_num_batches (int): Maximum number of batches to iterate over - in an epoch. If `max_num_batches * batch_size > num_data` then - only as many batches as the data can be split into will be - used. If set to -1 all of the data will be used. - shuffle_order (bool): Whether to randomly permute the order of - the data before each epoch. - rng (RandomState): A seeded random number generator. - """ - self.inputs = inputs - self.targets = targets - if batch_size < 1: - raise ValueError('batch_size must be >= 1') - self._batch_size = batch_size - if max_num_batches == 0 or max_num_batches < -1: - raise ValueError('max_num_batches must be -1 or > 0') - self._max_num_batches = max_num_batches - self._update_num_batches() - self.shuffle_order = shuffle_order - self._current_order = np.arange(inputs.shape[0]) - if rng is None: - rng = np.random.RandomState(DEFAULT_SEED) - self.rng = rng - self.new_epoch() - - @property - def batch_size(self): - """Number of data points to include in each batch.""" - return self._batch_size - - @batch_size.setter - def batch_size(self, value): - if value < 1: - raise ValueError('batch_size must be >= 1') - self._batch_size = value - self._update_num_batches() - - @property - def max_num_batches(self): - """Maximum number of batches to iterate over in an epoch.""" - return self._max_num_batches - - @max_num_batches.setter - def max_num_batches(self, value): - if value == 0 or value < -1: - raise ValueError('max_num_batches must be -1 or > 0') - self._max_num_batches = value - self._update_num_batches() - - def _update_num_batches(self): - """Updates number of batches to iterate over.""" - # maximum possible number of batches is equal to number of whole times - # batch_size divides in to the number of data points which can be - # found using integer division - possible_num_batches = self.inputs.shape[0] // self.batch_size - if self.max_num_batches == -1: - self.num_batches = possible_num_batches - else: - self.num_batches = min(self.max_num_batches, possible_num_batches) - - def __iter__(self): - """Implements Python iterator interface. - - This should return an object implementing a `next` method which steps - through a sequence returning one element at a time and raising - `StopIteration` when at the end of the sequence. Here the object - returned is the DataProvider itself. 
- """ - return self - - def new_epoch(self): - """Starts a new epoch (pass through data), possibly shuffling first.""" - self._curr_batch = 0 - if self.shuffle_order: - self.shuffle() - - def __next__(self): - return self.next() - - def reset(self): - """Resets the provider to the initial state.""" - inv_perm = np.argsort(self._current_order) - self._current_order = self._current_order[inv_perm] - self.inputs = self.inputs[inv_perm] - self.targets = self.targets[inv_perm] - self.new_epoch() - - def shuffle(self): - """Randomly shuffles order of data.""" - perm = self.rng.permutation(self.inputs.shape[0]) - self._current_order = self._current_order[perm] - self.inputs = self.inputs[perm] - self.targets = self.targets[perm] - - def next(self): - """Returns next data batch or raises `StopIteration` if at end.""" - if self._curr_batch + 1 > self.num_batches: - # no more batches in current iteration through data set so start - # new epoch ready for another pass and indicate iteration is at end - self.new_epoch() - raise StopIteration() - # create an index slice corresponding to current batch number - batch_slice = slice(self._curr_batch * self.batch_size, - (self._curr_batch + 1) * self.batch_size) - inputs_batch = self.inputs[batch_slice] - targets_batch = self.targets[batch_slice] - self._curr_batch += 1 - return inputs_batch, targets_batch - -class MNISTDataProvider(DataProvider): - """Data provider for MNIST handwritten digit images.""" - - def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, - shuffle_order=True, rng=None): - """Create a new MNIST data provider object. - - Args: - which_set: One of 'train', 'valid' or 'eval'. Determines which - portion of the MNIST data this object should provide. - batch_size (int): Number of data points to include in each batch. - max_num_batches (int): Maximum number of batches to iterate over - in an epoch. If `max_num_batches * batch_size > num_data` then - only as many batches as the data can be split into will be - used. If set to -1 all of the data will be used. - shuffle_order (bool): Whether to randomly permute the order of - the data before each epoch. - rng (RandomState): A seeded random number generator. - """ - # check a valid which_set was provided - assert which_set in ['train', 'valid', 'test'], ( - 'Expected which_set to be either train, valid or eval. ' - 'Got {0}'.format(which_set) - ) - self.which_set = which_set - self.num_classes = 10 - # construct path to data using os.path.join to ensure the correct path - # separator for the current platform / OS is used - # MLP_DATA_DIR environment variable should point to the data directory - data_path = os.path.join( - os.environ['MLP_DATA_DIR'], 'mnist-{0}.npz'.format(which_set)) - assert os.path.isfile(data_path), ( - 'Data file does not exist at expected path: ' + data_path - ) - # load data from compressed numpy file - loaded = np.load(data_path) - inputs, targets = loaded['inputs'], loaded['targets'] - inputs = inputs.astype(np.float32) - # pass the loaded data to the parent class __init__ - super(MNISTDataProvider, self).__init__( - inputs, targets, batch_size, max_num_batches, shuffle_order, rng) - - def next(self): - """Returns next data batch or raises `StopIteration` if at end.""" - inputs_batch, targets_batch = super(MNISTDataProvider, self).next() - return inputs_batch, self.to_one_of_k(targets_batch) - - def to_one_of_k(self, int_targets): - """Converts integer coded class target to 1 of K coded targets. 
- - Args: - int_targets (ndarray): Array of integer coded class targets (i.e. - where an integer from 0 to `num_classes` - 1 is used to - indicate which is the correct class). This should be of shape - (num_data,). - - Returns: - Array of 1 of K coded targets i.e. an array of shape - (num_data, num_classes) where for each row all elements are equal - to zero except for the column corresponding to the correct class - which is equal to one. - """ - one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) - one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 - return one_of_k_targets - -class EMNISTDataProvider(DataProvider): - """Data provider for EMNIST handwritten digit images.""" - - def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, - shuffle_order=True, rng=None): - """Create a new EMNIST data provider object. - - Args: - which_set: One of 'train', 'valid' or 'eval'. Determines which - portion of the EMNIST data this object should provide. - batch_size (int): Number of data points to include in each batch. - max_num_batches (int): Maximum number of batches to iterate over - in an epoch. If `max_num_batches * batch_size > num_data` then - only as many batches as the data can be split into will be - used. If set to -1 all of the data will be used. - shuffle_order (bool): Whether to randomly permute the order of - the data before each epoch. - rng (RandomState): A seeded random number generator. - """ - # check a valid which_set was provided - assert which_set in ['train', 'valid', 'test'], ( - 'Expected which_set to be either train, valid or eval. ' - 'Got {0}'.format(which_set) - ) - self.which_set = which_set - self.num_classes = 47 - # construct path to data using os.path.join to ensure the correct path - # separator for the current platform / OS is used - # MLP_DATA_DIR environment variable should point to the data directory - data_path = os.path.join( - os.environ['MLP_DATA_DIR'], 'emnist-{0}.npz'.format(which_set)) - assert os.path.isfile(data_path), ( - 'Data file does not exist at expected path: ' + data_path - ) - # load data from compressed numpy file - loaded = np.load(data_path) - print(loaded.keys()) - inputs, targets = loaded['inputs'], loaded['targets'] - inputs = inputs.astype(np.float32) - inputs = np.reshape(inputs, newshape=(-1, 28*28)) - inputs = inputs / 255.0 - # pass the loaded data to the parent class __init__ - super(EMNISTDataProvider, self).__init__( - inputs, targets, batch_size, max_num_batches, shuffle_order, rng) - - def next(self): - """Returns next data batch or raises `StopIteration` if at end.""" - inputs_batch, targets_batch = super(EMNISTDataProvider, self).next() - return inputs_batch, self.to_one_of_k(targets_batch) - - def to_one_of_k(self, int_targets): - """Converts integer coded class target to 1 of K coded targets. - - Args: - int_targets (ndarray): Array of integer coded class targets (i.e. - where an integer from 0 to `num_classes` - 1 is used to - indicate which is the correct class). This should be of shape - (num_data,). - - Returns: - Array of 1 of K coded targets i.e. an array of shape - (num_data, num_classes) where for each row all elements are equal - to zero except for the column corresponding to the correct class - which is equal to one. 
- """ - one_of_k_targets = np.zeros((int_targets.shape[0], self.num_classes)) - one_of_k_targets[range(int_targets.shape[0]), int_targets] = 1 - return one_of_k_targets - - -class MetOfficeDataProvider(DataProvider): - """South Scotland Met Office weather data provider.""" - - def __init__(self, window_size, batch_size=10, max_num_batches=-1, - shuffle_order=True, rng=None): - """Create a new Met Office data provider object. - - Args: - window_size (int): Size of windows to split weather time series - data into. The constructed input features will be the first - `window_size - 1` entries in each window and the target outputs - the last entry in each window. - batch_size (int): Number of data points to include in each batch. - max_num_batches (int): Maximum number of batches to iterate over - in an epoch. If `max_num_batches * batch_size > num_data` then - only as many batches as the data can be split into will be - used. If set to -1 all of the data will be used. - shuffle_order (bool): Whether to randomly permute the order of - the data before each epoch. - rng (RandomState): A seeded random number generator. - """ - data_path = os.path.join( - os.environ['MLP_DATA_DIR'], 'HadSSP_daily_qc.txt') - assert os.path.isfile(data_path), ( - 'Data file does not exist at expected path: ' + data_path - ) - raw = np.loadtxt(data_path, skiprows=3, usecols=range(2, 32)) - assert window_size > 1, 'window_size must be at least 2.' - self.window_size = window_size - # filter out all missing datapoints and flatten to a vector - filtered = raw[raw >= 0].flatten() - # normalise data to zero mean, unit standard deviation - mean = np.mean(filtered) - std = np.std(filtered) - normalised = (filtered - mean) / std - # create a view on to array corresponding to a rolling window - shape = (normalised.shape[-1] - self.window_size + 1, self.window_size) - strides = normalised.strides + (normalised.strides[-1],) - windowed = np.lib.stride_tricks.as_strided( - normalised, shape=shape, strides=strides) - # inputs are first (window_size - 1) entries in windows - inputs = windowed[:, :-1] - # targets are last entry in windows - targets = windowed[:, -1] - super(MetOfficeDataProvider, self).__init__( - inputs, targets, batch_size, max_num_batches, shuffle_order, rng) - -class CCPPDataProvider(DataProvider): - - def __init__(self, which_set='train', input_dims=None, batch_size=10, - max_num_batches=-1, shuffle_order=True, rng=None): - """Create a new Combined Cycle Power Plant data provider object. - - Args: - which_set: One of 'train' or 'valid'. Determines which portion of - data this object should provide. - input_dims: Which of the four input dimension to use. If `None` all - are used. If an iterable of integers are provided (consisting - of a subset of {0, 1, 2, 3}) then only the corresponding - input dimensions are included. - batch_size (int): Number of data points to include in each batch. - max_num_batches (int): Maximum number of batches to iterate over - in an epoch. If `max_num_batches * batch_size > num_data` then - only as many batches as the data can be split into will be - used. If set to -1 all of the data will be used. - shuffle_order (bool): Whether to randomly permute the order of - the data before each epoch. - rng (RandomState): A seeded random number generator. 
- """ - data_path = os.path.join( - os.environ['MLP_DATA_DIR'], 'ccpp_data.npz') - assert os.path.isfile(data_path), ( - 'Data file does not exist at expected path: ' + data_path - ) - # check a valid which_set was provided - assert which_set in ['train', 'valid'], ( - 'Expected which_set to be either train or valid ' - 'Got {0}'.format(which_set) - ) - # check input_dims are valid - if not input_dims is not None: - input_dims = set(input_dims) - assert input_dims.issubset({0, 1, 2, 3}), ( - 'input_dims should be a subset of {0, 1, 2, 3}' - ) - loaded = np.load(data_path) - inputs = loaded[which_set + '_inputs'] - if input_dims is not None: - inputs = inputs[:, input_dims] - targets = loaded[which_set + '_targets'] - super(CCPPDataProvider, self).__init__( - inputs, targets, batch_size, max_num_batches, shuffle_order, rng) - - -class AugmentedMNISTDataProvider(MNISTDataProvider): - """Data provider for MNIST dataset which randomly transforms images.""" - - def __init__(self, which_set='train', batch_size=100, max_num_batches=-1, - shuffle_order=True, rng=None, transformer=None): - """Create a new augmented MNIST data provider object. - - Args: - which_set: One of 'train', 'valid' or 'test'. Determines which - portion of the MNIST data this object should provide. - batch_size (int): Number of data points to include in each batch. - max_num_batches (int): Maximum number of batches to iterate over - in an epoch. If `max_num_batches * batch_size > num_data` then - only as many batches as the data can be split into will be - used. If set to -1 all of the data will be used. - shuffle_order (bool): Whether to randomly permute the order of - the data before each epoch. - rng (RandomState): A seeded random number generator. - transformer: Function which takes an `inputs` array of shape - (batch_size, input_dim) corresponding to a batch of input - images and a `rng` random number generator object (i.e. a - call signature `transformer(inputs, rng)`) and applies a - potentiall random set of transformations to some / all of the - input images as each new batch is returned when iterating over - the data provider. - """ - super(AugmentedMNISTDataProvider, self).__init__( - which_set, batch_size, max_num_batches, shuffle_order, rng) - self.transformer = transformer - - def next(self): - """Returns next data batch or raises `StopIteration` if at end.""" - inputs_batch, targets_batch = super( - AugmentedMNISTDataProvider, self).next() - transformed_inputs_batch = self.transformer(inputs_batch, self.rng) - return transformed_inputs_batch, targets_batch diff --git a/mlp/errors.py b/mlp/errors.py deleted file mode 100644 index 3f0ae4f..0000000 --- a/mlp/errors.py +++ /dev/null @@ -1,176 +0,0 @@ -# -*- coding: utf-8 -*- -"""Error functions. - -This module defines error functions, with the aim of model training being to -minimise the error function given a set of inputs and target outputs. - -The error functions will typically measure some concept of distance between the -model outputs and target outputs, averaged over all data points in the data set -or batch. -""" - -import numpy as np - - -class SumOfSquaredDiffsError(object): - """Sum of squared differences (squared Euclidean distance) error.""" - - def __call__(self, outputs, targets): - """Calculates error function given a batch of outputs and targets. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Scalar cost function value. 
- """ - return 0.5 * np.mean(np.sum((outputs - targets)**2, axis=1)) - - def grad(self, outputs, targets): - """Calculates gradient of error function with respect to outputs. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Gradient of error function with respect to outputs. - """ - return (outputs - targets) / outputs.shape[0] - - def __repr__(self): - return 'MeanSquaredErrorCost' - - -class BinaryCrossEntropyError(object): - """Binary cross entropy error.""" - - def __call__(self, outputs, targets): - """Calculates error function given a batch of outputs and targets. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Scalar error function value. - """ - return -np.mean( - targets * np.log(outputs) + (1. - targets) * np.log(1. - ouputs)) - - def grad(self, outputs, targets): - """Calculates gradient of error function with respect to outputs. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Gradient of error function with respect to outputs. - """ - return ((1. - targets) / (1. - outputs) - - (targets / outputs)) / outputs.shape[0] - - def __repr__(self): - return 'BinaryCrossEntropyError' - - -class BinaryCrossEntropySigmoidError(object): - """Binary cross entropy error with logistic sigmoid applied to outputs.""" - - def __call__(self, outputs, targets): - """Calculates error function given a batch of outputs and targets. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Scalar error function value. - """ - probs = 1. / (1. + np.exp(-outputs)) - return -np.mean( - targets * np.log(probs) + (1. - targets) * np.log(1. - probs)) - - def grad(self, outputs, targets): - """Calculates gradient of error function with respect to outputs. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Gradient of error function with respect to outputs. - """ - probs = 1. / (1. + np.exp(-outputs)) - return (probs - targets) / outputs.shape[0] - - def __repr__(self): - return 'BinaryCrossEntropySigmoidError' - - -class CrossEntropyError(object): - """Multi-class cross entropy error.""" - - def __call__(self, outputs, targets): - """Calculates error function given a batch of outputs and targets. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Scalar error function value. - """ - return -np.mean(np.sum(targets * np.log(outputs), axis=1)) - - def grad(self, outputs, targets): - """Calculates gradient of error function with respect to outputs. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Gradient of error function with respect to outputs. 
- """ - return -(targets / outputs) / outputs.shape[0] - - def __repr__(self): - return 'CrossEntropyError' - - -class CrossEntropySoftmaxError(object): - """Multi-class cross entropy error with Softmax applied to outputs.""" - - def __call__(self, outputs, targets): - """Calculates error function given a batch of outputs and targets. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Scalar error function value. - """ - normOutputs = outputs - outputs.max(-1)[:, None] - logProb = normOutputs - np.log(np.sum(np.exp(normOutputs), axis=-1)[:, None]) - return -np.mean(np.sum(targets * logProb, axis=1)) - - def grad(self, outputs, targets): - """Calculates gradient of error function with respect to outputs. - - Args: - outputs: Array of model outputs of shape (batch_size, output_dim). - targets: Array of target outputs of shape (batch_size, output_dim). - - Returns: - Gradient of error function with respect to outputs. - """ - probs = np.exp(outputs - outputs.max(-1)[:, None]) - probs /= probs.sum(-1)[:, None] - return (probs - targets) / outputs.shape[0] - - def __repr__(self): - return 'CrossEntropySoftmaxError' diff --git a/mlp/initialisers.py b/mlp/initialisers.py deleted file mode 100644 index 8c8e252..0000000 --- a/mlp/initialisers.py +++ /dev/null @@ -1,143 +0,0 @@ -# -*- coding: utf-8 -*- -"""Parameter initialisers. - -This module defines classes to initialise the parameters in a layer. -""" - -import numpy as np -from mlp import DEFAULT_SEED - - -class ConstantInit(object): - """Constant parameter initialiser.""" - - def __init__(self, value): - """Construct a constant parameter initialiser. - - Args: - value: Value to initialise parameter to. - """ - self.value = value - - def __call__(self, shape): - return np.ones(shape=shape) * self.value - - -class UniformInit(object): - """Random uniform parameter initialiser.""" - - def __init__(self, low, high, rng=None): - """Construct a random uniform parameter initialiser. - - Args: - low: Lower bound of interval to sample from. - high: Upper bound of interval to sample from. - rng (RandomState): Seeded random number generator. - """ - self.low = low - self.high = high - if rng is None: - rng = np.random.RandomState(DEFAULT_SEED) - self.rng = rng - - def __call__(self, shape): - return self.rng.uniform(low=self.low, high=self.high, size=shape) - - -class NormalInit(object): - """Random normal parameter initialiser.""" - - def __init__(self, mean, std, rng=None): - """Construct a random uniform parameter initialiser. - - Args: - mean: Mean of distribution to sample from. - std: Standard deviation of distribution to sample from. - rng (RandomState): Seeded random number generator. - """ - self.mean = mean - self.std = std - if rng is None: - rng = np.random.RandomState(DEFAULT_SEED) - self.rng = rng - - def __call__(self, shape): - return self.rng.normal(loc=self.mean, scale=self.std, size=shape) - -class GlorotUniformInit(object): - """Glorot and Bengio (2010) random uniform weights initialiser. - - Initialises an two-dimensional parameter array using the 'normalized - initialisation' scheme suggested in [1] which attempts to maintain a - roughly constant variance in the activations and backpropagated gradients - of a multi-layer model consisting of interleaved affine and logistic - sigmoidal transformation layers. 
- - Weights are sampled from a zero-mean uniform distribution with standard - deviation `sqrt(2 / (input_dim * output_dim))` where `input_dim` and - `output_dim` are the input and output dimensions of the weight matrix - respectively. - - References: - [1]: Understanding the difficulty of training deep feedforward neural - networks, Glorot and Bengio (2010) - """ - - def __init__(self, gain=1., rng=None): - """Construct a normalised initilisation random initialiser object. - - Args: - gain: Multiplicative factor to scale initialised weights by. - Recommended values is 1 for affine layers followed by - logistic sigmoid layers (or another affine layer). - rng (RandomState): Seeded random number generator. - """ - self.gain = gain - if rng is None: - rng = np.random.RandomState(DEFAULT_SEED) - self.rng = rng - - def __call__(self, shape): - assert len(shape) == 2, ( - 'Initialiser should only be used for two dimensional arrays.') - std = self.gain * (2. / (shape[0] + shape[1]))**0.5 - half_width = 3.**0.5 * std - return self.rng.uniform(low=-half_width, high=half_width, size=shape) - - -class GlorotNormalInit(object): - """Glorot and Bengio (2010) random normal weights initialiser. - - Initialises an two-dimensional parameter array using the 'normalized - initialisation' scheme suggested in [1] which attempts to maintain a - roughly constant variance in the activations and backpropagated gradients - of a multi-layer model consisting of interleaved affine and logistic - sigmoidal transformation layers. - - Weights are sampled from a zero-mean normal distribution with standard - deviation `sqrt(2 / (input_dim * output_dim))` where `input_dim` and - `output_dim` are the input and output dimensions of the weight matrix - respectively. - - References: - [1]: Understanding the difficulty of training deep feedforward neural - networks, Glorot and Bengio (2010) - """ - - def __init__(self, gain=1., rng=None): - """Construct a normalised initilisation random initialiser object. - - Args: - gain: Multiplicative factor to scale initialised weights by. - Recommended values is 1 for affine layers followed by - logistic sigmoid layers (or another affine layer). - rng (RandomState): Seeded random number generator. - """ - self.gain = gain - if rng is None: - rng = np.random.RandomState(DEFAULT_SEED) - self.rng = rng - - def __call__(self, shape): - std = self.gain * (2. / (shape[0] + shape[1]))**0.5 - return self.rng.normal(loc=0., scale=std, size=shape) diff --git a/mlp/layers.py b/mlp/layers.py deleted file mode 100644 index 6393803..0000000 --- a/mlp/layers.py +++ /dev/null @@ -1,1002 +0,0 @@ -# -*- coding: utf-8 -*- -"""Layer definitions. - -This module defines classes which encapsulate a single layer. - -These layers map input activations to output activation with the `fprop` -method and map gradients with repsect to outputs to gradients with respect to -their inputs with the `bprop` method. - -Some layers will have learnable parameters and so will additionally define -methods for getting and setting parameter and calculating gradients with -respect to the layer parameters. -""" - -import numpy as np -import mlp.initialisers as init -from mlp import DEFAULT_SEED - -class Layer(object): - """Abstract class defining the interface for a layer.""" - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). 
- """ - raise NotImplementedError() - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - raise NotImplementedError() - - -class LayerWithParameters(Layer): - """Abstract class defining the interface for a layer with parameters.""" - - def grads_wrt_params(self, inputs, grads_wrt_outputs): - """Calculates gradients with respect to layer parameters. - - Args: - inputs: Array of inputs to layer of shape (batch_size, input_dim). - grads_wrt_to_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - List of arrays of gradients with respect to the layer parameters - with parameter gradients appearing in same order in tuple as - returned from `get_params` method. - """ - raise NotImplementedError() - - def params_penalty(self): - """Returns the parameter dependent penalty term for this layer. - - If no parameter-dependent penalty terms are set this returns zero. - """ - raise NotImplementedError() - - @property - def params(self): - """Returns a list of parameters of layer. - - Returns: - List of current parameter values. This list should be in the - corresponding order to the `values` argument to `set_params`. - """ - raise NotImplementedError() - - @params.setter - def params(self, values): - """Sets layer parameters from a list of values. - - Args: - values: List of values to set parameters to. This list should be - in the corresponding order to what is returned by `get_params`. - """ - raise NotImplementedError() - -class StochasticLayerWithParameters(Layer): - """Specialised layer which uses a stochastic forward propagation.""" - - def __init__(self, rng=None): - """Constructs a new StochasticLayer object. - - Args: - rng (RandomState): Seeded random number generator object. - """ - if rng is None: - rng = np.random.RandomState(DEFAULT_SEED) - self.rng = rng - - def fprop(self, inputs, stochastic=True): - """Forward propagates activations through the layer transformation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - stochastic: Flag allowing different deterministic - forward-propagation mode in addition to default stochastic - forward-propagation e.g. for use at test time. If False - a deterministic forward-propagation transformation - corresponding to the expected output of the stochastic - forward-propagation is applied. - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - raise NotImplementedError() - def grads_wrt_params(self, inputs, grads_wrt_outputs): - """Calculates gradients with respect to layer parameters. - - Args: - inputs: Array of inputs to layer of shape (batch_size, input_dim). - grads_wrt_to_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - List of arrays of gradients with respect to the layer parameters - with parameter gradients appearing in same order in tuple as - returned from `get_params` method. 
- """ - raise NotImplementedError() - - def params_penalty(self): - """Returns the parameter dependent penalty term for this layer. - - If no parameter-dependent penalty terms are set this returns zero. - """ - raise NotImplementedError() - - @property - def params(self): - """Returns a list of parameters of layer. - - Returns: - List of current parameter values. This list should be in the - corresponding order to the `values` argument to `set_params`. - """ - raise NotImplementedError() - - @params.setter - def params(self, values): - """Sets layer parameters from a list of values. - - Args: - values: List of values to set parameters to. This list should be - in the corresponding order to what is returned by `get_params`. - """ - raise NotImplementedError() - -class StochasticLayer(Layer): - """Specialised layer which uses a stochastic forward propagation.""" - - def __init__(self, rng=None): - """Constructs a new StochasticLayer object. - - Args: - rng (RandomState): Seeded random number generator object. - """ - if rng is None: - rng = np.random.RandomState(DEFAULT_SEED) - self.rng = rng - - def fprop(self, inputs, stochastic=True): - """Forward propagates activations through the layer transformation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - stochastic: Flag allowing different deterministic - forward-propagation mode in addition to default stochastic - forward-propagation e.g. for use at test time. If False - a deterministic forward-propagation transformation - corresponding to the expected output of the stochastic - forward-propagation is applied. - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - raise NotImplementedError() - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. This should correspond to - default stochastic forward-propagation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - raise NotImplementedError() - - -class AffineLayer(LayerWithParameters): - """Layer implementing an affine tranformation of its inputs. - - This layer is parameterised by a weight matrix and bias vector. - """ - - def __init__(self, input_dim, output_dim, - weights_initialiser=init.UniformInit(-0.1, 0.1), - biases_initialiser=init.ConstantInit(0.), - weights_penalty=None, biases_penalty=None): - """Initialises a parameterised affine layer. - - Args: - input_dim (int): Dimension of inputs to the layer. - output_dim (int): Dimension of the layer outputs. - weights_initialiser: Initialiser for the weight parameters. - biases_initialiser: Initialiser for the bias parameters. - weights_penalty: Weights-dependent penalty term (regulariser) or - None if no regularisation is to be applied to the weights. - biases_penalty: Biases-dependent penalty term (regulariser) or - None if no regularisation is to be applied to the biases. 
- """ - self.input_dim = input_dim - self.output_dim = output_dim - self.weights = weights_initialiser((self.output_dim, self.input_dim)) - self.biases = biases_initialiser(self.output_dim) - self.weights_penalty = weights_penalty - self.biases_penalty = biases_penalty - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - For inputs `x`, outputs `y`, weights `W` and biases `b` the layer - corresponds to `y = W.dot(x) + b`. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - return self.weights.dot(inputs.T).T + self.biases - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - return grads_wrt_outputs.dot(self.weights) - - def grads_wrt_params(self, inputs, grads_wrt_outputs): - """Calculates gradients with respect to layer parameters. - - Args: - inputs: array of inputs to layer of shape (batch_size, input_dim) - grads_wrt_to_outputs: array of gradients with respect to the layer - outputs of shape (batch_size, output_dim) - - Returns: - list of arrays of gradients with respect to the layer parameters - `[grads_wrt_weights, grads_wrt_biases]`. - """ - - grads_wrt_weights = np.dot(grads_wrt_outputs.T, inputs) - grads_wrt_biases = np.sum(grads_wrt_outputs, axis=0) - - if self.weights_penalty is not None: - grads_wrt_weights += self.weights_penalty.grad(self.weights) - - if self.biases_penalty is not None: - grads_wrt_biases += self.biases_penalty.grad(self.biases) - - return [grads_wrt_weights, grads_wrt_biases] - - def params_penalty(self): - """Returns the parameter dependent penalty term for this layer. - - If no parameter-dependent penalty terms are set this returns zero. - """ - params_penalty = 0 - if self.weights_penalty is not None: - params_penalty += self.weights_penalty(self.weights) - if self.biases_penalty is not None: - params_penalty += self.biases_penalty(self.biases) - return params_penalty - - @property - def params(self): - """A list of layer parameter values: `[weights, biases]`.""" - return [self.weights, self.biases] - - @params.setter - def params(self, values): - self.weights = values[0] - self.biases = values[1] - - def __repr__(self): - return 'AffineLayer(input_dim={0}, output_dim={1})'.format( - self.input_dim, self.output_dim) - -class BatchNormalizationLayer(StochasticLayerWithParameters): - """Layer implementing an affine tranformation of its inputs. - - This layer is parameterised by a weight matrix and bias vector. - """ - - def __init__(self, input_dim, rng=None): - """Initialises a parameterised affine layer. - - Args: - input_dim (int): Dimension of inputs to the layer. - output_dim (int): Dimension of the layer outputs. - weights_initialiser: Initialiser for the weight parameters. - biases_initialiser: Initialiser for the bias parameters. 
- weights_penalty: Weights-dependent penalty term (regulariser) or - None if no regularisation is to be applied to the weights. - biases_penalty: Biases-dependent penalty term (regulariser) or - None if no regularisation is to be applied to the biases. - """ - super(BatchNormalizationLayer, self).__init__(rng) - self.beta = np.random.normal(size=(input_dim)) - self.gamma = np.random.normal(size=(input_dim)) - self.epsilon = 0.00001 - self.cache = None - self.input_dim = input_dim - - def fprop(self, inputs, stochastic=True): - """Forward propagates inputs through a layer.""" - - raise NotImplementedError - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - - raise NotImplementedError - - def grads_wrt_params(self, inputs, grads_wrt_outputs): - """Calculates gradients with respect to layer parameters. - - Args: - inputs: array of inputs to layer of shape (batch_size, input_dim) - grads_wrt_to_outputs: array of gradients with respect to the layer - outputs of shape (batch_size, output_dim) - - Returns: - list of arrays of gradients with respect to the layer parameters - `[grads_wrt_weights, grads_wrt_biases]`. - """ - raise NotImplementedError - - def params_penalty(self): - """Returns the parameter dependent penalty term for this layer. - - If no parameter-dependent penalty terms are set this returns zero. - """ - params_penalty = 0 - - return params_penalty - - @property - def params(self): - """A list of layer parameter values: `[gammas, betas]`.""" - return [self.gamma, self.beta] - - @params.setter - def params(self, values): - self.gamma = values[0] - self.beta = values[1] - - def __repr__(self): - return 'BatchNormalizationLayer(input_dim={0})'.format( - self.input_dim) - - -class SigmoidLayer(Layer): - """Layer implementing an element-wise logistic sigmoid transformation.""" - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - For inputs `x` and outputs `y` this corresponds to - `y = 1 / (1 + exp(-x))`. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - return 1. / (1. + np.exp(-inputs)) - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - return grads_wrt_outputs * outputs * (1. 
- outputs) - - def __repr__(self): - return 'SigmoidLayer' - -class ConvolutionalLayer(LayerWithParameters): - """Layer implementing a 2D convolution-based transformation of its inputs. - The layer is parameterised by a set of 2D convolutional kernels, a four - dimensional array of shape - (num_output_channels, num_input_channels, kernel_dim_1, kernel_dim_2) - and a bias vector, a one dimensional array of shape - (num_output_channels,) - i.e. one shared bias per output channel. - Assuming no-padding is applied to the inputs so that outputs are only - calculated for positions where the kernel filters fully overlap with the - inputs, and that unit strides are used the outputs will have spatial extent - output_dim_1 = input_dim_1 - kernel_dim_1 + 1 - output_dim_2 = input_dim_2 - kernel_dim_2 + 1 - """ - - def __init__(self, num_input_channels, num_output_channels, - input_dim_1, input_dim_2, - kernel_dim_1, kernel_dim_2, - kernels_init=init.UniformInit(-0.01, 0.01), - biases_init=init.ConstantInit(0.), - kernels_penalty=None, biases_penalty=None): - """Initialises a parameterised convolutional layer. - Args: - num_input_channels (int): Number of channels in inputs to - layer (this may be number of colour channels in the input - images if used as the first layer in a model, or the - number of output channels, a.k.a. feature maps, from a - a previous convolutional layer). - num_output_channels (int): Number of channels in outputs - from the layer, a.k.a. number of feature maps. - input_dim_1 (int): Size of first input dimension of each 2D - channel of inputs. - input_dim_2 (int): Size of second input dimension of each 2D - channel of inputs. - kernel_dim_1 (int): Size of first dimension of each 2D channel of - kernels. - kernel_dim_2 (int): Size of second dimension of each 2D channel of - kernels. - kernels_intialiser: Initialiser for the kernel parameters. - biases_initialiser: Initialiser for the bias parameters. - kernels_penalty: Kernel-dependent penalty term (regulariser) or - None if no regularisation is to be applied to the kernels. - biases_penalty: Biases-dependent penalty term (regulariser) or - None if no regularisation is to be applied to the biases. - """ - self.num_input_channels = num_input_channels - self.num_output_channels = num_output_channels - self.input_dim_1 = input_dim_1 - self.input_dim_2 = input_dim_2 - self.kernel_dim_1 = kernel_dim_1 - self.kernel_dim_2 = kernel_dim_2 - self.kernels_init = kernels_init - self.biases_init = biases_init - self.kernels_shape = ( - num_output_channels, num_input_channels, kernel_dim_1, kernel_dim_2 - ) - self.inputs_shape = ( - None, num_input_channels, input_dim_1, input_dim_2 - ) - self.kernels = self.kernels_init(self.kernels_shape) - self.biases = self.biases_init(num_output_channels) - self.kernels_penalty = kernels_penalty - self.biases_penalty = biases_penalty - - self.cache = None - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - For inputs `x`, outputs `y`, kernels `K` and biases `b` the layer - corresponds to `y = conv2d(x, K) + b`. - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - raise NotImplementedError - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. 
- Args: - inputs: Array of layer inputs of shape - (batch_size, num_input_channels, input_dim_1, input_dim_2). - outputs: Array of layer outputs calculated in forward pass of - shape - (batch_size, num_output_channels, output_dim_1, output_dim_2). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape - (batch_size, num_output_channels, output_dim_1, output_dim_2). - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - # Pad the grads_wrt_outputs - - raise NotImplementedError - - def grads_wrt_params(self, inputs, grads_wrt_outputs): - """Calculates gradients with respect to layer parameters. - Args: - inputs: array of inputs to layer of shape (batch_size, input_dim) - grads_wrt_to_outputs: array of gradients with respect to the layer - outputs of shape - (batch_size, num_output-_channels, output_dim_1, output_dim_2). - Returns: - list of arrays of gradients with respect to the layer parameters - `[grads_wrt_kernels, grads_wrt_biases]`. - """ - - raise NotImplementedError - - def params_penalty(self): - """Returns the parameter dependent penalty term for this layer. - If no parameter-dependent penalty terms are set this returns zero. - """ - params_penalty = 0 - if self.kernels_penalty is not None: - params_penalty += self.kernels_penalty(self.kernels) - if self.biases_penalty is not None: - params_penalty += self.biases_penalty(self.biases) - return params_penalty - - @property - def params(self): - """A list of layer parameter values: `[kernels, biases]`.""" - return [self.kernels, self.biases] - - @params.setter - def params(self, values): - self.kernels = values[0] - self.biases = values[1] - - def __repr__(self): - return ( - 'ConvolutionalLayer(\n' - ' num_input_channels={0}, num_output_channels={1},\n' - ' input_dim_1={2}, input_dim_2={3},\n' - ' kernel_dim_1={4}, kernel_dim_2={5}\n' - ')' - .format(self.num_input_channels, self.num_output_channels, - self.input_dim_1, self.input_dim_2, self.kernel_dim_1, - self.kernel_dim_2) - ) - - -class ReluLayer(Layer): - """Layer implementing an element-wise rectified linear transformation.""" - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - For inputs `x` and outputs `y` this corresponds to `y = max(0, x)`. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - return np.maximum(inputs, 0.) - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - return (outputs > 0) * grads_wrt_outputs - - def __repr__(self): - return 'ReluLayer' - -class LeakyReluLayer(Layer): - """Layer implementing an element-wise rectified linear transformation.""" - def __init__(self, alpha=0.01): - self.alpha = alpha - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. 
- - For inputs `x` and outputs `y` this corresponds to `y = max(0, x)`. - """ - positive_inputs = np.maximum(inputs, 0.) - - negative_inputs = inputs - negative_inputs[negative_inputs>0] = 0. - negative_inputs = negative_inputs * self.alpha - - outputs = positive_inputs + negative_inputs - return outputs - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - """ - positive_gradients = (outputs > 0) * grads_wrt_outputs - negative_gradients = self.alpha * (outputs < 0) * grads_wrt_outputs - gradients = positive_gradients + negative_gradients - return gradients - - def __repr__(self): - return 'LeakyReluLayer' - -class ELULayer(Layer): - """Layer implementing an ELU activation.""" - def __init__(self, alpha=1.0): - self.alpha = alpha - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - For inputs `x` and outputs `y` this corresponds to `y = max(0, x)`. - """ - positive_inputs = np.maximum(inputs, 0.) - - negative_inputs = np.copy(inputs) - negative_inputs[negative_inputs>0] = 0. - negative_inputs = self.alpha * (np.exp(negative_inputs) - 1) - - outputs = positive_inputs + negative_inputs - return outputs - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - """ - positive_gradients = (outputs >= 0) * grads_wrt_outputs - outputs_to_use = (outputs < 0) * outputs - negative_gradients = (outputs_to_use + self.alpha) - negative_gradients[outputs >= 0] = 0. - negative_gradients = negative_gradients * grads_wrt_outputs - gradients = positive_gradients + negative_gradients - return gradients - - def __repr__(self): - return 'ELULayer' - -class SELULayer(Layer): - """Layer implementing an element-wise rectified linear transformation.""" - #α01 ≈ 1.6733 and λ01 ≈ 1.0507 - def __init__(self): - self.alpha = 1.6733 - self.lamda = 1.0507 - self.elu = ELULayer(alpha=self.alpha) - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - For inputs `x` and outputs `y` this corresponds to `y = max(0, x)`. - """ - outputs = self.lamda * self.elu.fprop(inputs) - return outputs - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - """ - scaled_outputs = outputs / self.lamda - gradients = self.lamda * self.elu.bprop(inputs=inputs, outputs=scaled_outputs, - grads_wrt_outputs=grads_wrt_outputs) - return gradients - - def __repr__(self): - return 'SELULayer' - -class TanhLayer(Layer): - """Layer implementing an element-wise hyperbolic tangent transformation.""" - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - For inputs `x` and outputs `y` this corresponds to `y = tanh(x)`. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - return np.tanh(inputs) - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. 
- - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - return (1. - outputs**2) * grads_wrt_outputs - - def __repr__(self): - return 'TanhLayer' - - -class SoftmaxLayer(Layer): - """Layer implementing a softmax transformation.""" - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - For inputs `x` and outputs `y` this corresponds to - - `y = exp(x) / sum(exp(x))`. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - # subtract max inside exponential to improve numerical stability - - # when we divide through by sum this term cancels - exp_inputs = np.exp(inputs - inputs.max(-1)[:, None]) - return exp_inputs / exp_inputs.sum(-1)[:, None] - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - return (outputs * (grads_wrt_outputs - - (grads_wrt_outputs * outputs).sum(-1)[:, None])) - - def __repr__(self): - return 'SoftmaxLayer' - - -class RadialBasisFunctionLayer(Layer): - """Layer implementing projection to a grid of radial basis functions.""" - - def __init__(self, grid_dim, intervals=[[0., 1.]]): - """Creates a radial basis function layer object. - - Args: - grid_dim: Integer specifying how many basis function to use in - grid across input space per dimension (so total number of - basis functions will be grid_dim**input_dim) - intervals: List of intervals (two element lists or tuples) - specifying extents of axis-aligned region in input-space to - tile basis functions in grid across. For example for a 2D input - space spanning [0, 1] x [0, 1] use intervals=[[0, 1], [0, 1]]. - """ - num_basis = grid_dim**len(intervals) - self.centres = np.array(np.meshgrid(*[ - np.linspace(low, high, grid_dim) for (low, high) in intervals]) - ).reshape((len(intervals), -1)) - self.scales = np.array([ - [(high - low) * 1. / grid_dim] for (low, high) in intervals]) - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - return np.exp(-(inputs[..., None] - self.centres[None, ...])**2 / - self.scales**2).reshape((inputs.shape[0], -1)) - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. 
- - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - num_basis = self.centres.shape[1] - return -2 * ( - ((inputs[..., None] - self.centres[None, ...]) / self.scales**2) * - grads_wrt_outputs.reshape((inputs.shape[0], -1, num_basis)) - ).sum(-1) - - def __repr__(self): - return 'RadialBasisFunctionLayer(grid_dim={0})'.format(self.grid_dim) - -class DropoutLayer(StochasticLayer): - """Layer which stochastically drops input dimensions in its output.""" - - def __init__(self, rng=None, incl_prob=0.5, share_across_batch=True): - """Construct a new dropout layer. - - Args: - rng (RandomState): Seeded random number generator. - incl_prob: Scalar value in (0, 1] specifying the probability of - each input dimension being included in the output. - share_across_batch: Whether to use same dropout mask across - all inputs in a batch or use per input masks. - """ - super(DropoutLayer, self).__init__(rng) - assert incl_prob > 0. and incl_prob <= 1. - self.incl_prob = incl_prob - self.share_across_batch = share_across_batch - self.rng = rng - - def fprop(self, inputs, stochastic=True): - """Forward propagates activations through the layer transformation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - stochastic: Flag allowing different deterministic - forward-propagation mode in addition to default stochastic - forward-propagation e.g. for use at test time. If False - a deterministic forward-propagation transformation - corresponding to the expected output of the stochastic - forward-propagation is applied. - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - if stochastic: - mask_shape = (1,) + inputs.shape[1:] if self.share_across_batch else inputs.shape - self._mask = (self.rng.uniform(size=mask_shape) < self.incl_prob) - return inputs * self._mask - else: - return inputs * self.incl_prob - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. This should correspond to - default stochastic forward-propagation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - return grads_wrt_outputs * self._mask - - def __repr__(self): - return 'DropoutLayer(incl_prob={0:.1f})'.format(self.incl_prob) - -class ReshapeLayer(Layer): - """Layer which reshapes dimensions of inputs.""" - - def __init__(self, output_shape=None): - """Create a new reshape layer object. - - Args: - output_shape: Tuple specifying shape each input in batch should - be reshaped to in outputs. 
This **excludes** the batch size - so the shape of the final output array will be - (batch_size, ) + output_shape - Similarly to numpy.reshape, one shape dimension can be -1. In - this case, the value is inferred from the size of the input - array and remaining dimensions. The shape specified must be - compatible with the input array shape - i.e. the total number - of values in the array cannot be changed. If set to `None` the - output shape will be set to - (batch_size, -1) - which will flatten all the inputs to vectors. - """ - self.output_shape = (-1,) if output_shape is None else output_shape - - def fprop(self, inputs): - """Forward propagates activations through the layer transformation. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - - Returns: - outputs: Array of layer outputs of shape (batch_size, output_dim). - """ - return inputs.reshape((inputs.shape[0],) + self.output_shape) - - def bprop(self, inputs, outputs, grads_wrt_outputs): - """Back propagates gradients through a layer. - - Given gradients with respect to the outputs of the layer calculates the - gradients with respect to the layer inputs. - - Args: - inputs: Array of layer inputs of shape (batch_size, input_dim). - outputs: Array of layer outputs calculated in forward pass of - shape (batch_size, output_dim). - grads_wrt_outputs: Array of gradients with respect to the layer - outputs of shape (batch_size, output_dim). - - Returns: - Array of gradients with respect to the layer inputs of shape - (batch_size, input_dim). - """ - return grads_wrt_outputs.reshape(inputs.shape) - - def __repr__(self): - return 'ReshapeLayer(output_shape={0})'.format(self.output_shape) diff --git a/mlp/learning_rules.py b/mlp/learning_rules.py deleted file mode 100644 index 22f2bcb..0000000 --- a/mlp/learning_rules.py +++ /dev/null @@ -1,162 +0,0 @@ -# -*- coding: utf-8 -*- -"""Learning rules. - -This module contains classes implementing gradient based learning rules. -""" - -import numpy as np - - -class GradientDescentLearningRule(object): - """Simple (stochastic) gradient descent learning rule. - - For a scalar error function `E(p[0], p_[1] ... )` of some set of - potentially multidimensional parameters this attempts to find a local - minimum of the loss function by applying updates to each parameter of the - form - - p[i] := p[i] - learning_rate * dE/dp[i] - - With `learning_rate` a positive scaling parameter. - - The error function used in successive applications of these updates may be - a stochastic estimator of the true error function (e.g. when the error with - respect to only a subset of data-points is calculated) in which case this - will correspond to a stochastic gradient descent learning rule. - """ - - def __init__(self, learning_rate=1e-3): - """Creates a new learning rule object. - - Args: - learning_rate: A postive scalar to scale gradient updates to the - parameters by. This needs to be carefully set - if too large - the learning dynamic will be unstable and may diverge, while - if set too small learning will proceed very slowly. - - """ - assert learning_rate > 0., 'learning_rate should be positive.' - self.learning_rate = learning_rate - - def initialise(self, params): - """Initialises the state of the learning rule for a set or parameters. - - This must be called before `update_params` is first called. - - Args: - params: A list of the parameters to be optimised. Note these will - be updated *in-place* to avoid reallocating arrays on each - update. 
- """ - self.params = params - - def reset(self): - """Resets any additional state variables to their intial values. - - For this learning rule there are no additional state variables so we - do nothing here. - """ - pass - - def update_params(self, grads_wrt_params): - """Applies a single gradient descent update to all parameters. - - All parameter updates are performed using in-place operations and so - nothing is returned. - - Args: - grads_wrt_params: A list of gradients of the scalar loss function - with respect to each of the parameters passed to `initialise` - previously, with this list expected to be in the same order. - """ - for param, grad in zip(self.params, grads_wrt_params): - param -= self.learning_rate * grad - - -class MomentumLearningRule(GradientDescentLearningRule): - """Gradient descent with momentum learning rule. - - This extends the basic gradient learning rule by introducing extra - momentum state variables for each parameter. These can help the learning - dynamic help overcome shallow local minima and speed convergence when - making multiple successive steps in a similar direction in parameter space. - - For parameter p[i] and corresponding momentum m[i] the updates for a - scalar loss function `L` are of the form - - m[i] := mom_coeff * m[i] - learning_rate * dL/dp[i] - p[i] := p[i] + m[i] - - with `learning_rate` a positive scaling parameter for the gradient updates - and `mom_coeff` a value in [0, 1] that determines how much 'friction' there - is the system and so how quickly previous momentum contributions decay. - """ - - def __init__(self, learning_rate=1e-3, mom_coeff=0.9): - """Creates a new learning rule object. - - Args: - learning_rate: A postive scalar to scale gradient updates to the - parameters by. This needs to be carefully set - if too large - the learning dynamic will be unstable and may diverge, while - if set too small learning will proceed very slowly. - mom_coeff: A scalar in the range [0, 1] inclusive. This determines - the contribution of the previous momentum value to the value - after each update. If equal to 0 the momentum is set to exactly - the negative scaled gradient each update and so this rule - collapses to standard gradient descent. If equal to 1 the - momentum will just be decremented by the scaled gradient at - each update. This is equivalent to simulating the dynamic in - a frictionless system. Due to energy conservation the loss - of 'potential energy' as the dynamics moves down the loss - function surface will lead to an increasingly large 'kinetic - energy' and so speed, meaning the updates will become - increasingly large, potentially unstably so. Typically a value - less than but close to 1 will avoid these issues and cause the - dynamic to converge to a local minima where the gradients are - by definition zero. - """ - super(MomentumLearningRule, self).__init__(learning_rate) - assert mom_coeff >= 0. and mom_coeff <= 1., ( - 'mom_coeff should be in the range [0, 1].' - ) - self.mom_coeff = mom_coeff - - def initialise(self, params): - """Initialises the state of the learning rule for a set or parameters. - - This must be called before `update_params` is first called. - - Args: - params: A list of the parameters to be optimised. Note these will - be updated *in-place* to avoid reallocating arrays on each - update. 
- """ - super(MomentumLearningRule, self).initialise(params) - self.moms = [] - for param in self.params: - self.moms.append(np.zeros_like(param)) - - def reset(self): - """Resets any additional state variables to their intial values. - - For this learning rule this corresponds to zeroing all the momenta. - """ - for mom in zip(self.moms): - mom *= 0. - - def update_params(self, grads_wrt_params): - """Applies a single update to all parameters. - - All parameter updates are performed using in-place operations and so - nothing is returned. - - Args: - grads_wrt_params: A list of gradients of the scalar loss function - with respect to each of the parameters passed to `initialise` - previously, with this list expected to be in the same order. - """ - for param, mom, grad in zip(self.params, self.moms, grads_wrt_params): - mom *= self.mom_coeff - mom -= self.learning_rate * grad - param += mom diff --git a/mlp/models.py b/mlp/models.py deleted file mode 100644 index 7f1273e..0000000 --- a/mlp/models.py +++ /dev/null @@ -1,145 +0,0 @@ -# -*- coding: utf-8 -*- -"""Model definitions. - -This module implements objects encapsulating learnable models of input-output -relationships. The model objects implement methods for forward propagating -the inputs through the transformation(s) defined by the model to produce -outputs (and intermediate states) and for calculating gradients of scalar -functions of the outputs with respect to the model parameters. -""" - -from mlp.layers import LayerWithParameters, StochasticLayer, StochasticLayerWithParameters - - -class SingleLayerModel(object): - """A model consisting of a single transformation layer.""" - - def __init__(self, layer): - """Create a new single layer model instance. - - Args: - layer: The layer object defining the model architecture. - """ - self.layer = layer - - @property - def params(self): - """A list of all of the parameters of the model.""" - return self.layer.params - - def fprop(self, inputs): - """Calculate the model outputs corresponding to a batch of inputs. - - Args: - inputs: Batch of inputs to the model. - - Returns: - List which is a concatenation of the model inputs and model - outputs, this being done for consistency of the interface with - multi-layer models for which `fprop` returns a list of - activations through all immediate layers of the model and including - the inputs and outputs. - """ - activations = [inputs, self.layer.fprop(inputs)] - return activations - - def grads_wrt_params(self, activations, grads_wrt_outputs): - """Calculates gradients with respect to the model parameters. - - Args: - activations: List of all activations from forward pass through - model using `fprop`. - grads_wrt_outputs: Gradient with respect to the model outputs of - the scalar function parameter gradients are being calculated - for. - - Returns: - List of gradients of the scalar function with respect to all model - parameters. - """ - return self.layer.grads_wrt_params(activations[0], grads_wrt_outputs) - - def __repr__(self): - return 'SingleLayerModel(' + str(self.layer) + ')' - - -class MultipleLayerModel(object): - """A model consisting of multiple layers applied sequentially.""" - - def __init__(self, layers): - """Create a new multiple layer model instance. - - Args: - layers: List of the the layer objecst defining the model in the - order they should be applied from inputs to outputs. 
- """ - self.layers = layers - - @property - def params(self): - """A list of all of the parameters of the model.""" - params = [] - for layer in self.layers: - if isinstance(layer, LayerWithParameters) or isinstance(layer, StochasticLayerWithParameters): - params += layer.params - return params - - def fprop(self, inputs, evaluation=False): - """Forward propagates a batch of inputs through the model. - - Args: - inputs: Batch of inputs to the model. - - Returns: - List of the activations at the output of all layers of the model - plus the inputs (to the first layer) as the first element. The - last element of the list corresponds to the model outputs. - """ - activations = [inputs] - for i, layer in enumerate(self.layers): - if evaluation: - if issubclass(type(self.layers[i]), StochasticLayer) or issubclass(type(self.layers[i]), - StochasticLayerWithParameters): - current_activations = self.layers[i].fprop(activations[i], stochastic=False) - else: - current_activations = self.layers[i].fprop(activations[i]) - else: - if issubclass(type(self.layers[i]), StochasticLayer) or issubclass(type(self.layers[i]), - StochasticLayerWithParameters): - current_activations = self.layers[i].fprop(activations[i], stochastic=True) - else: - current_activations = self.layers[i].fprop(activations[i]) - activations.append(current_activations) - return activations - - def grads_wrt_params(self, activations, grads_wrt_outputs): - """Calculates gradients with respect to the model parameters. - - Args: - activations: List of all activations from forward pass through - model using `fprop`. - grads_wrt_outputs: Gradient with respect to the model outputs of - the scalar function parameter gradients are being calculated - for. - - Returns: - List of gradients of the scalar function with respect to all model - parameters. - """ - grads_wrt_params = [] - for i, layer in enumerate(self.layers[::-1]): - inputs = activations[-i - 2] - outputs = activations[-i - 1] - grads_wrt_inputs = layer.bprop(inputs, outputs, grads_wrt_outputs) - if isinstance(layer, LayerWithParameters) or isinstance(layer, StochasticLayerWithParameters): - grads_wrt_params += layer.grads_wrt_params( - inputs, grads_wrt_outputs)[::-1] - grads_wrt_outputs = grads_wrt_inputs - return grads_wrt_params[::-1] - - def __repr__(self): - return ( - 'MultiLayerModel(\n ' + - '\n '.join([str(layer) for layer in self.layers]) + - '\n)' - ) diff --git a/mlp/optimisers.py b/mlp/optimisers.py deleted file mode 100644 index 8ab313a..0000000 --- a/mlp/optimisers.py +++ /dev/null @@ -1,148 +0,0 @@ -# -*- coding: utf-8 -*- -"""Model optimisers. - -This module contains objects implementing (batched) stochastic gradient descent -based optimisation of models. -""" - -import time -import logging -from collections import OrderedDict -import numpy as np -import tqdm - -logger = logging.getLogger(__name__) - - -class Optimiser(object): - """Basic model optimiser.""" - - def __init__(self, model, error, learning_rule, train_dataset, - valid_dataset=None, data_monitors=None, notebook=False): - """Create a new optimiser instance. - - Args: - model: The model to optimise. - error: The scalar error function to minimise. - learning_rule: Gradient based learning rule to use to minimise - error. - train_dataset: Data provider for training set data batches. - valid_dataset: Data provider for validation set data batches. 
- data_monitors: Dictionary of functions evaluated on targets and - model outputs (averaged across both full training and - validation data sets) to monitor during training in addition - to the error. Keys should correspond to a string label for - the statistic being evaluated. - """ - self.model = model - self.error = error - self.learning_rule = learning_rule - self.learning_rule.initialise(self.model.params) - self.train_dataset = train_dataset - self.valid_dataset = valid_dataset - self.data_monitors = OrderedDict([('error', error)]) - if data_monitors is not None: - self.data_monitors.update(data_monitors) - self.notebook = notebook - if notebook: - self.tqdm_progress = tqdm.tqdm_notebook - else: - self.tqdm_progress = tqdm.tqdm - - def do_training_epoch(self): - """Do a single training epoch. - - This iterates through all batches in training dataset, for each - calculating the gradient of the estimated error given the batch with - respect to all the model parameters and then updates the model - parameters according to the learning rule. - """ - with self.tqdm_progress(total=self.train_dataset.num_batches) as train_progress_bar: - train_progress_bar.set_description("Epoch Progress") - for inputs_batch, targets_batch in self.train_dataset: - activations = self.model.fprop(inputs_batch) - grads_wrt_outputs = self.error.grad(activations[-1], targets_batch) - grads_wrt_params = self.model.grads_wrt_params( - activations, grads_wrt_outputs) - self.learning_rule.update_params(grads_wrt_params) - train_progress_bar.update(1) - - def eval_monitors(self, dataset, label): - """Evaluates the monitors for the given dataset. - - Args: - dataset: Dataset to perform evaluation with. - label: Tag to add to end of monitor keys to identify dataset. - - Returns: - OrderedDict of monitor values evaluated on dataset. - """ - data_mon_vals = OrderedDict([(key + label, 0.) for key - in self.data_monitors.keys()]) - for inputs_batch, targets_batch in dataset: - activations = self.model.fprop(inputs_batch, evaluation=True) - for key, data_monitor in self.data_monitors.items(): - data_mon_vals[key + label] += data_monitor( - activations[-1], targets_batch) - for key, data_monitor in self.data_monitors.items(): - data_mon_vals[key + label] /= dataset.num_batches - return data_mon_vals - - def get_epoch_stats(self): - """Computes training statistics for an epoch. - - Returns: - An OrderedDict with keys corresponding to the statistic labels and - values corresponding to the value of the statistic. - """ - epoch_stats = OrderedDict() - epoch_stats.update(self.eval_monitors(self.train_dataset, '(train)')) - if self.valid_dataset is not None: - epoch_stats.update(self.eval_monitors( - self.valid_dataset, '(valid)')) - return epoch_stats - - def log_stats(self, epoch, epoch_time, stats): - """Outputs stats for a training epoch to a logger. - - Args: - epoch (int): Epoch counter. - epoch_time: Time taken in seconds for the epoch to complete. - stats: Monitored stats for the epoch. - """ - logger.info('Epoch {0}: {1:.1f}s to complete\n {2}'.format( - epoch, epoch_time, - ', '.join(['{0}={1:.2e}'.format(k, v) for (k, v) in stats.items()]) - )) - - def train(self, num_epochs, stats_interval=5): - """Trains a model for a set number of epochs. - - Args: - num_epochs: Number of epochs (complete passes through trainin - dataset) to train for. - stats_interval: Training statistics will be recorded and logged - every `stats_interval` epochs. 
- - Returns: - Tuple with first value being an array of training run statistics - and the second being a dict mapping the labels for the statistics - recorded to their column index in the array. - """ - start_train_time = time.time() - run_stats = [list(self.get_epoch_stats().values())] - with self.tqdm_progress(total=num_epochs) as progress_bar: - progress_bar.set_description("Experiment Progress") - for epoch in range(1, num_epochs + 1): - start_time = time.time() - self.do_training_epoch() - epoch_time = time.time()- start_time - if epoch % stats_interval == 0: - stats = self.get_epoch_stats() - self.log_stats(epoch, epoch_time, stats) - run_stats.append(list(stats.values())) - progress_bar.update(1) - finish_train_time = time.time() - total_train_time = finish_train_time - start_train_time - return np.array(run_stats), {k: i for i, k in enumerate(stats.keys())}, total_train_time - diff --git a/mlp/schedulers.py b/mlp/schedulers.py deleted file mode 100644 index 4f53e7e..0000000 --- a/mlp/schedulers.py +++ /dev/null @@ -1,34 +0,0 @@ -# -*- coding: utf-8 -*- -"""Training schedulers. - -This module contains classes implementing schedulers which control the -evolution of learning rule hyperparameters (such as learning rate) over a -training run. -""" - -import numpy as np - - -class ConstantLearningRateScheduler(object): - """Example of scheduler interface which sets a constant learning rate.""" - - def __init__(self, learning_rate): - """Construct a new constant learning rate scheduler object. - - Args: - learning_rate: Learning rate to use in learning rule. - """ - self.learning_rate = learning_rate - - def update_learning_rule(self, learning_rule, epoch_number): - """Update the hyperparameters of the learning rule. - - Run at the beginning of each epoch. - - Args: - learning_rule: Learning rule object being used in training run, - any scheduled hyperparameters to be altered should be - attributes of this object. - epoch_number: Integer index of training epoch about to be run. 
- """ - learning_rule.learning_rate = self.learning_rate diff --git a/msd10_network_trainer.py b/msd10_network_trainer.py new file mode 100644 index 0000000..3e43939 --- /dev/null +++ b/msd10_network_trainer.py @@ -0,0 +1,181 @@ +import argparse +import numpy as np +import tensorflow as tf +import tqdm +from data_providers import MSD10GenreDataProvider +from network_builder import ClassifierNetworkGraph +from utils.parser_utils import ParserClass +from utils.storage import build_experiment_folder, save_statistics, get_best_validation_model_statistics + +tf.reset_default_graph() # resets any previous graphs to clear memory +parser = argparse.ArgumentParser(description='Welcome to CNN experiments script') # generates an argument parser +parser_extractor = ParserClass(parser=parser) # creates a parser class to process the parsed input + +batch_size, seed, epochs, logs_path, continue_from_epoch, tensorboard_enable, batch_norm, \ +strided_dim_reduction, experiment_prefix, dropout_rate_value = parser_extractor.get_argument_variables() +# returns a list of objects that contain +# our parsed input + +experiment_name = "experiment_{}_batch_size_{}_bn_{}_mp_{}".format(experiment_prefix, + batch_size, batch_norm, + strided_dim_reduction) +# generate experiment name + +rng = np.random.RandomState(seed=seed) # set seed + +train_data = MSD10GenreDataProvider(which_set="train", batch_size=batch_size, rng=rng, random_sampling=True) +val_data = MSD10GenreDataProvider(which_set="valid", batch_size=batch_size, rng=rng) +test_data = MSD10GenreDataProvider(which_set="test", batch_size=batch_size, rng=rng) +# setup our data providers + +print("Running {}".format(experiment_name)) +print("Starting from epoch {}".format(continue_from_epoch)) + +saved_models_filepath, logs_filepath = build_experiment_folder(experiment_name, logs_path) # generate experiment dir + +# Placeholder setup +data_inputs = tf.placeholder(tf.float32, [batch_size, train_data.inputs.shape[1]], 'data-inputs') +data_targets = tf.placeholder(tf.int32, [batch_size], 'data-targets') + +training_phase = tf.placeholder(tf.bool, name='training-flag') +rotate_data = tf.placeholder(tf.bool, name='rotate-flag') +dropout_rate = tf.placeholder(tf.float32, name='dropout-prob') + +classifier_network = ClassifierNetworkGraph(network_name='FCCClassifier', + input_x=data_inputs, target_placeholder=data_targets, + dropout_rate=dropout_rate, batch_size=batch_size, + n_classes=train_data.num_classes, + is_training=training_phase, augment_rotate_flag=rotate_data, + strided_dim_reduction=strided_dim_reduction, + use_batch_normalization=batch_norm) # initialize our computational graph + +if continue_from_epoch == -1: # if this is a new experiment and not continuation of a previous one then generate a new + # statistics file + save_statistics(logs_filepath, "result_summary_statistics", ["epoch", "train_c_loss", "train_c_accuracy", + "val_c_loss", "val_c_accuracy", + "test_c_loss", "test_c_accuracy"], create=True) + +start_epoch = continue_from_epoch if continue_from_epoch != -1 else 0 # if new experiment start from 0 otherwise +# continue where left off + +summary_op, losses_ops, c_error_opt_op = classifier_network.init_train() # get graph operations (ops) + +total_train_batches = train_data.num_batches +total_val_batches = val_data.num_batches +total_test_batches = test_data.num_batches + +if tensorboard_enable: + print("saved tensorboard file at", logs_filepath) + writer = tf.summary.FileWriter(logs_filepath, graph=tf.get_default_graph()) + +init = 
tf.global_variables_initializer() # initialization op for the graph + +with tf.Session() as sess: + sess.run(init) # actually running the initialization op + train_saver = tf.train.Saver() # saver object that will save our graph so we can reload it later for continuation of + val_saver = tf.train.Saver() + best_val_accuracy = 0. + best_epoch = 0 + # training or inference + + if continue_from_epoch != -1: + train_saver.restore(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, + continue_from_epoch)) # restore previous graph to continue operations + best_val_accuracy, best_epoch = get_best_validation_model_statistics(logs_filepath, "result_summary_statistics") + print(best_val_accuracy, best_epoch) + + with tqdm.tqdm(total=epochs - start_epoch) as epoch_pbar: + for e in range(start_epoch, epochs): + total_c_loss = 0. + total_accuracy = 0. + with tqdm.tqdm(total=total_train_batches) as pbar_train: + for batch_idx, (x_batch, y_batch) in enumerate(train_data): + iter_id = e * total_train_batches + batch_idx + _, c_loss_value, acc = sess.run( + [c_error_opt_op, losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + # Here we execute the c_error_opt_op which trains the network and also the ops that compute the + # loss and accuracy, we save those in _, c_loss_value and acc respectively. + total_c_loss += c_loss_value # add loss of current iter to sum + total_accuracy += acc # add acc of current iter to sum + + iter_out = "iter_num: {}, train_loss: {}, train_accuracy: {}".format(iter_id, + total_c_loss / (batch_idx + 1), + total_accuracy / ( + batch_idx + 1)) # show + # iter statistics using running averages of previous iter within this epoch + pbar_train.set_description(iter_out) + pbar_train.update(1) + if tensorboard_enable and batch_idx % 25 == 0: # save tensorboard summary every 25 iterations + _summary = sess.run( + summary_op, + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + writer.add_summary(_summary, global_step=iter_id) + + total_c_loss /= total_train_batches # compute mean of los + total_accuracy /= total_train_batches # compute mean of accuracy + + save_path = train_saver.save(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + # save graph and weights + print("Saved current model at", save_path) + + total_val_c_loss = 0. + total_val_accuracy = 0. 
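+            # Note: the validation pass below feeds training_phase=False, so tf.layers.dropout acts as the
+            # identity (the dropout_rate value fed here has no effect) and, when batch normalization is
+            # enabled, it uses its accumulated moving statistics rather than per-batch statistics.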
# run validation stage, note how training_phase placeholder is set to False + # and that we do not run the c_error_opt_op which runs gradient descent, but instead only call the loss ops + # to collect losses on the validation set + with tqdm.tqdm(total=total_val_batches) as pbar_val: + for batch_idx, (x_batch, y_batch) in enumerate(val_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_val_c_loss += c_loss_value + total_val_accuracy += acc + iter_out = "val_loss: {}, val_accuracy: {}".format(total_val_c_loss / (batch_idx + 1), + total_val_accuracy / (batch_idx + 1)) + pbar_val.set_description(iter_out) + pbar_val.update(1) + + total_val_c_loss /= total_val_batches + total_val_accuracy /= total_val_batches + + if best_val_accuracy < total_val_accuracy: # check if val acc better than the previous best and if + # so save current as best and save the model as the best validation model to be used on the test set + # after the final epoch + best_val_accuracy = total_val_accuracy + best_epoch = e + save_path = val_saver.save(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + print("Saved best validation score model at", save_path) + + epoch_pbar.update(1) + # save statistics of this epoch, train and val without test set performance + save_statistics(logs_filepath, "result_summary_statistics", + [e, total_c_loss, total_accuracy, total_val_c_loss, total_val_accuracy, + -1, -1]) + + val_saver.restore(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, best_epoch)) + # restore model with best performance on validation set + total_test_c_loss = 0. + total_test_accuracy = 0. 
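+    # Since the best-validation checkpoint was restored just above, the test loss and accuracy computed
+    # below correspond to the model from best_epoch, not to the model after the final training epoch.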
+ # computer test loss and accuracy and save + with tqdm.tqdm(total=total_test_batches) as pbar_test: + for batch_idx, (x_batch, y_batch) in enumerate(test_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_test_c_loss += c_loss_value + total_test_accuracy += acc + iter_out = "test_loss: {}, test_accuracy: {}".format(total_test_c_loss / (batch_idx + 1), + total_test_accuracy / (batch_idx + 1)) + pbar_test.set_description(iter_out) + pbar_test.update(1) + + total_test_c_loss /= total_test_batches + total_test_accuracy /= total_test_batches + + save_statistics(logs_filepath, "result_summary_statistics", + ["test set performance", -1, -1, -1, -1, + total_test_c_loss, total_test_accuracy]) diff --git a/msd25_network_trainer.py b/msd25_network_trainer.py new file mode 100644 index 0000000..3e43939 --- /dev/null +++ b/msd25_network_trainer.py @@ -0,0 +1,181 @@ +import argparse +import numpy as np +import tensorflow as tf +import tqdm +from data_providers import MSD10GenreDataProvider +from network_builder import ClassifierNetworkGraph +from utils.parser_utils import ParserClass +from utils.storage import build_experiment_folder, save_statistics, get_best_validation_model_statistics + +tf.reset_default_graph() # resets any previous graphs to clear memory +parser = argparse.ArgumentParser(description='Welcome to CNN experiments script') # generates an argument parser +parser_extractor = ParserClass(parser=parser) # creates a parser class to process the parsed input + +batch_size, seed, epochs, logs_path, continue_from_epoch, tensorboard_enable, batch_norm, \ +strided_dim_reduction, experiment_prefix, dropout_rate_value = parser_extractor.get_argument_variables() +# returns a list of objects that contain +# our parsed input + +experiment_name = "experiment_{}_batch_size_{}_bn_{}_mp_{}".format(experiment_prefix, + batch_size, batch_norm, + strided_dim_reduction) +# generate experiment name + +rng = np.random.RandomState(seed=seed) # set seed + +train_data = MSD10GenreDataProvider(which_set="train", batch_size=batch_size, rng=rng, random_sampling=True) +val_data = MSD10GenreDataProvider(which_set="valid", batch_size=batch_size, rng=rng) +test_data = MSD10GenreDataProvider(which_set="test", batch_size=batch_size, rng=rng) +# setup our data providers + +print("Running {}".format(experiment_name)) +print("Starting from epoch {}".format(continue_from_epoch)) + +saved_models_filepath, logs_filepath = build_experiment_folder(experiment_name, logs_path) # generate experiment dir + +# Placeholder setup +data_inputs = tf.placeholder(tf.float32, [batch_size, train_data.inputs.shape[1]], 'data-inputs') +data_targets = tf.placeholder(tf.int32, [batch_size], 'data-targets') + +training_phase = tf.placeholder(tf.bool, name='training-flag') +rotate_data = tf.placeholder(tf.bool, name='rotate-flag') +dropout_rate = tf.placeholder(tf.float32, name='dropout-prob') + +classifier_network = ClassifierNetworkGraph(network_name='FCCClassifier', + input_x=data_inputs, target_placeholder=data_targets, + dropout_rate=dropout_rate, batch_size=batch_size, + n_classes=train_data.num_classes, + is_training=training_phase, augment_rotate_flag=rotate_data, + strided_dim_reduction=strided_dim_reduction, + use_batch_normalization=batch_norm) # initialize our computational graph + +if continue_from_epoch == -1: # if this is a new experiment 
and not continuation of a previous one then generate a new + # statistics file + save_statistics(logs_filepath, "result_summary_statistics", ["epoch", "train_c_loss", "train_c_accuracy", + "val_c_loss", "val_c_accuracy", + "test_c_loss", "test_c_accuracy"], create=True) + +start_epoch = continue_from_epoch if continue_from_epoch != -1 else 0 # if new experiment start from 0 otherwise +# continue where left off + +summary_op, losses_ops, c_error_opt_op = classifier_network.init_train() # get graph operations (ops) + +total_train_batches = train_data.num_batches +total_val_batches = val_data.num_batches +total_test_batches = test_data.num_batches + +if tensorboard_enable: + print("saved tensorboard file at", logs_filepath) + writer = tf.summary.FileWriter(logs_filepath, graph=tf.get_default_graph()) + +init = tf.global_variables_initializer() # initialization op for the graph + +with tf.Session() as sess: + sess.run(init) # actually running the initialization op + train_saver = tf.train.Saver() # saver object that will save our graph so we can reload it later for continuation of + val_saver = tf.train.Saver() + best_val_accuracy = 0. + best_epoch = 0 + # training or inference + + if continue_from_epoch != -1: + train_saver.restore(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, + continue_from_epoch)) # restore previous graph to continue operations + best_val_accuracy, best_epoch = get_best_validation_model_statistics(logs_filepath, "result_summary_statistics") + print(best_val_accuracy, best_epoch) + + with tqdm.tqdm(total=epochs - start_epoch) as epoch_pbar: + for e in range(start_epoch, epochs): + total_c_loss = 0. + total_accuracy = 0. + with tqdm.tqdm(total=total_train_batches) as pbar_train: + for batch_idx, (x_batch, y_batch) in enumerate(train_data): + iter_id = e * total_train_batches + batch_idx + _, c_loss_value, acc = sess.run( + [c_error_opt_op, losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + # Here we execute the c_error_opt_op which trains the network and also the ops that compute the + # loss and accuracy, we save those in _, c_loss_value and acc respectively. + total_c_loss += c_loss_value # add loss of current iter to sum + total_accuracy += acc # add acc of current iter to sum + + iter_out = "iter_num: {}, train_loss: {}, train_accuracy: {}".format(iter_id, + total_c_loss / (batch_idx + 1), + total_accuracy / ( + batch_idx + 1)) # show + # iter statistics using running averages of previous iter within this epoch + pbar_train.set_description(iter_out) + pbar_train.update(1) + if tensorboard_enable and batch_idx % 25 == 0: # save tensorboard summary every 25 iterations + _summary = sess.run( + summary_op, + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: True, rotate_data: False}) + writer.add_summary(_summary, global_step=iter_id) + + total_c_loss /= total_train_batches # compute mean of los + total_accuracy /= total_train_batches # compute mean of accuracy + + save_path = train_saver.save(sess, "{}/{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + # save graph and weights + print("Saved current model at", save_path) + + total_val_c_loss = 0. + total_val_accuracy = 0. 
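+            # Checkpointing note: train_saver writes one checkpoint per epoch (this is what
+            # continue_from_epoch restores), while val_saver only keeps the model with the best
+            # validation accuracy seen so far, which is restored later for the test pass.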
# run validation stage, note how training_phase placeholder is set to False + # and that we do not run the c_error_opt_op which runs gradient descent, but instead only call the loss ops + # to collect losses on the validation set + with tqdm.tqdm(total=total_val_batches) as pbar_val: + for batch_idx, (x_batch, y_batch) in enumerate(val_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_val_c_loss += c_loss_value + total_val_accuracy += acc + iter_out = "val_loss: {}, val_accuracy: {}".format(total_val_c_loss / (batch_idx + 1), + total_val_accuracy / (batch_idx + 1)) + pbar_val.set_description(iter_out) + pbar_val.update(1) + + total_val_c_loss /= total_val_batches + total_val_accuracy /= total_val_batches + + if best_val_accuracy < total_val_accuracy: # check if val acc better than the previous best and if + # so save current as best and save the model as the best validation model to be used on the test set + # after the final epoch + best_val_accuracy = total_val_accuracy + best_epoch = e + save_path = val_saver.save(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, e)) + print("Saved best validation score model at", save_path) + + epoch_pbar.update(1) + # save statistics of this epoch, train and val without test set performance + save_statistics(logs_filepath, "result_summary_statistics", + [e, total_c_loss, total_accuracy, total_val_c_loss, total_val_accuracy, + -1, -1]) + + val_saver.restore(sess, "{}/best_validation_{}_{}.ckpt".format(saved_models_filepath, experiment_name, best_epoch)) + # restore model with best performance on validation set + total_test_c_loss = 0. + total_test_accuracy = 0. 
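+    # The "test set performance" row written below uses -1 placeholders in the train/val columns, mirroring
+    # the -1 test placeholders in the per-epoch rows, so every row of the statistics file keeps the same layout.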
+ # computer test loss and accuracy and save + with tqdm.tqdm(total=total_test_batches) as pbar_test: + for batch_idx, (x_batch, y_batch) in enumerate(test_data): + c_loss_value, acc = sess.run( + [losses_ops["crossentropy_losses"], losses_ops["accuracy"]], + feed_dict={dropout_rate: dropout_rate_value, data_inputs: x_batch, + data_targets: y_batch, training_phase: False, rotate_data: False}) + total_test_c_loss += c_loss_value + total_test_accuracy += acc + iter_out = "test_loss: {}, test_accuracy: {}".format(total_test_c_loss / (batch_idx + 1), + total_test_accuracy / (batch_idx + 1)) + pbar_test.set_description(iter_out) + pbar_test.update(1) + + total_test_c_loss /= total_test_batches + total_test_accuracy /= total_test_batches + + save_statistics(logs_filepath, "result_summary_statistics", + ["test set performance", -1, -1, -1, -1, + total_test_c_loss, total_test_accuracy]) diff --git a/network_architectures.py b/network_architectures.py new file mode 100644 index 0000000..aaa29dd --- /dev/null +++ b/network_architectures.py @@ -0,0 +1,146 @@ +import tensorflow as tf +from tensorflow.contrib.layers import batch_norm +from tensorflow.python.ops.nn_ops import leaky_relu + +from utils.network_summary import count_parameters + + +class VGGClassifier: + def __init__(self, batch_size, layer_stage_sizes, name, num_classes, batch_norm_use=False, + inner_layer_depth=2, strided_dim_reduction=True): + + """ + Initializes a VGG Classifier architecture + :param batch_size: The size of the data batch + :param layer_stage_sizes: A list containing the filters for each layer stage, where layer stage is a series of + convolutional layers with stride=1 and no max pooling followed by a dimensionality reducing stage which is + either a convolution with stride=1 followed by max pooling or a convolution with stride=2 + (i.e. strided convolution). So if we pass a list [64, 128, 256] it means that if we have inner_layer_depth=2 + then stage 0 will have 2 layers with stride=1 and filter size=64 and another dimensionality reducing convolution + with either stride=1 and max pooling or stride=2 to dimensionality reduce. Similarly for the other stages. + :param name: Name of the network + :param num_classes: Number of classes we will need to classify + :param num_channels: Number of channels of our image data. + :param batch_norm_use: Whether to use batch norm between layers or not. + :param inner_layer_depth: The amount of extra layers on top of the dimensionality reducing stage to have per + layer stage. + :param strided_dim_reduction: Whether to use strided convolutions instead of max pooling. + """ + self.reuse = False + self.batch_size = batch_size + self.layer_stage_sizes = layer_stage_sizes + self.name = name + self.num_classes = num_classes + self.batch_norm_use = batch_norm_use + self.inner_layer_depth = inner_layer_depth + self.strided_dim_reduction = strided_dim_reduction + self.build_completed = False + + def __call__(self, image_input, training=False, dropout_rate=0.0): + """ + Runs the CNN producing the predictions and the gradients. + :param image_input: Image input to produce embeddings for. e.g. 
for EMNIST [batch_size, 28, 28, 1] + :param training: A flag indicating training or evaluation + :param dropout_rate: A tf placeholder of type tf.float32 indicating the amount of dropout applied + :return: Embeddings of size [batch_size, self.num_classes] + """ + + with tf.variable_scope(self.name, reuse=self.reuse): + layer_features = [] + with tf.variable_scope('VGGNet'): + outputs = image_input + for i in range(len(self.layer_stage_sizes)): + with tf.variable_scope('conv_stage_{}'.format(i)): + for j in range(self.inner_layer_depth): + with tf.variable_scope('conv_{}_{}'.format(i, j)): + if (j == self.inner_layer_depth-1) and self.strided_dim_reduction: + stride = 2 + else: + stride = 1 + outputs = tf.layers.conv2d(outputs, self.layer_stage_sizes[i], [3, 3], + strides=(stride, stride), + padding='SAME', activation=None) + outputs = leaky_relu(outputs, name="leaky_relu{}".format(i)) + layer_features.append(outputs) + if self.batch_norm_use: + outputs = batch_norm(outputs, decay=0.99, scale=True, + center=True, is_training=training, renorm=False) + if self.strided_dim_reduction==False: + outputs = tf.layers.max_pooling2d(outputs, pool_size=(2, 2), strides=2) + + outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=training) + # apply dropout only at dimensionality + # reducing steps, i.e. the last layer in + # every group + + c_conv_encoder = outputs + c_conv_encoder = tf.contrib.layers.flatten(c_conv_encoder) + c_conv_encoder = tf.layers.dense(c_conv_encoder, units=self.num_classes) + + self.reuse = True + self.variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=self.name) + + if not self.build_completed: + self.build_completed = True + count_parameters(self.variables, "VGGNet") + + return c_conv_encoder, layer_features + + +class FCCLayerClassifier: + def __init__(self, batch_size, layer_stage_sizes, name, num_classes, batch_norm_use=False, + inner_layer_depth=2, strided_dim_reduction=True): + + """ + Initializes a FCC Classifier architecture + """ + self.reuse = False + self.batch_size = batch_size + self.layer_stage_sizes = layer_stage_sizes + self.name = name + self.num_classes = num_classes + self.batch_norm_use = batch_norm_use + self.inner_layer_depth = inner_layer_depth + self.strided_dim_reduction = strided_dim_reduction + self.build_completed = False + + def __call__(self, image_input, training=False, dropout_rate=0.0): + """ + Runs the CNN producing the predictions and the gradients. + :param image_input: Image input to produce embeddings for. e.g. 
for EMNIST [batch_size, 28, 28, 1] + :param training: A flag indicating training or evaluation + :param dropout_rate: A tf placeholder of type tf.float32 indicating the amount of dropout applied + :return: Embeddings of size [batch_size, self.num_classes] + """ + + with tf.variable_scope(self.name, reuse=self.reuse): + layer_features = [] + with tf.variable_scope('FCCLayerNet'): + outputs = image_input + for i in range(len(self.layer_stage_sizes)): + with tf.variable_scope('conv_stage_{}'.format(i)): + for j in range(self.inner_layer_depth): + with tf.variable_scope('conv_{}_{}'.format(i, j)): + outputs = tf.layers.dense(outputs, units=self.layer_stage_sizes[i]) + outputs = leaky_relu(outputs, name="leaky_relu{}".format(i)) + layer_features.append(outputs) + if self.batch_norm_use: + outputs = batch_norm(outputs, decay=0.99, scale=True, + center=True, is_training=training, renorm=False) + outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=training) + # apply dropout only at dimensionality + # reducing steps, i.e. the last layer in + # every group + + c_conv_encoder = outputs + c_conv_encoder = tf.contrib.layers.flatten(c_conv_encoder) + c_conv_encoder = tf.layers.dense(c_conv_encoder, units=self.num_classes) + + self.reuse = True + self.variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=self.name) + + if not self.build_completed: + self.build_completed = True + count_parameters(self.variables, "FCCLayerNet") + + return c_conv_encoder, layer_features diff --git a/network_builder.py b/network_builder.py new file mode 100644 index 0000000..eed8da7 --- /dev/null +++ b/network_builder.py @@ -0,0 +1,177 @@ +import tensorflow as tf +from network_architectures import VGGClassifier, FCCLayerClassifier + + +class ClassifierNetworkGraph: + def __init__(self, input_x, target_placeholder, dropout_rate, + batch_size=100, n_classes=100, is_training=True, augment_rotate_flag=True, + tensorboard_use=False, use_batch_normalization=False, strided_dim_reduction=True, + network_name='VGG_classifier'): + + """ + Initializes a Classifier Network Graph that can build models, train, compute losses and save summary statistics + and images + :param input_x: A placeholder that will feed the input images, usually of size [batch_size, height, width, + channels] + :param target_placeholder: A target placeholder of size [batch_size,]. The classes should be in index form + i.e. not one hot encoding, that will be done automatically by tf + :param dropout_rate: A placeholder of size [None] that holds a single float that defines the amount of dropout + to apply to the network. i.e. 
for 0.1 drop 0.1 of neurons + :param batch_size: The batch size + :param num_channels: Number of channels + :param n_classes: Number of classes we will be classifying + :param is_training: A placeholder that will indicate whether we are training or not + :param augment_rotate_flag: A placeholder indicating whether to apply rotations augmentations to our input data + :param tensorboard_use: Whether to use tensorboard in this experiment + :param use_batch_normalization: Whether to use batch normalization between layers + :param strided_dim_reduction: Whether to use strided dim reduction instead of max pooling + """ + self.batch_size = batch_size + if network_name == "VGG_classifier": + self.c = VGGClassifier(self.batch_size, name="classifier_neural_network", + batch_norm_use=use_batch_normalization, num_classes=n_classes, + layer_stage_sizes=[64, 128, 256], strided_dim_reduction=strided_dim_reduction) + elif network_name == "FCCClassifier": + self.c = FCCLayerClassifier(self.batch_size, name="classifier_neural_network", + batch_norm_use=use_batch_normalization, num_classes=n_classes, + layer_stage_sizes=[64, 128, 256], strided_dim_reduction=strided_dim_reduction) + + self.input_x = input_x + self.dropout_rate = dropout_rate + self.targets = target_placeholder + + self.training_phase = is_training + self.n_classes = n_classes + self.iterations_trained = 0 + + self.augment_rotate = augment_rotate_flag + self.is_tensorboard = tensorboard_use + self.strided_dim_reduction = strided_dim_reduction + self.use_batch_normalization = use_batch_normalization + + def loss(self): + """build models, calculates losses, saves summary statistcs and images. + Returns: + dict of losses. + """ + with tf.name_scope("losses"): + image_inputs = self.data_augment_batch(self.input_x) # conditionally apply augmentaions + true_outputs = self.targets + # produce predictions and get layer features to save for visual inspection + preds, layer_features = self.c(image_input=image_inputs, training=self.training_phase, + dropout_rate=self.dropout_rate) + # compute loss and accuracy + correct_prediction = tf.equal(tf.argmax(preds, 1), tf.cast(true_outputs, tf.int64)) + accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) + crossentropy_loss = tf.reduce_mean( + tf.nn.sparse_softmax_cross_entropy_with_logits(labels=true_outputs, logits=preds)) + + # add loss and accuracy to collections + tf.add_to_collection('crossentropy_losses', crossentropy_loss) + tf.add_to_collection('accuracy', accuracy) + + # save summaries for the losses, accuracy and image summaries for input images, augmented images + # and the layer features + if len(self.input_x.get_shape().as_list()) == 4: + self.save_features(name="VGG_features", features=layer_features) + tf.summary.image('image', [tf.concat(tf.unstack(self.input_x, axis=0), axis=0)]) + tf.summary.image('augmented_image', [tf.concat(tf.unstack(image_inputs, axis=0), axis=0)]) + tf.summary.scalar('crossentropy_losses', crossentropy_loss) + tf.summary.scalar('accuracy', accuracy) + + return {"crossentropy_losses": tf.add_n(tf.get_collection('crossentropy_losses'), + name='total_classification_loss'), + "accuracy": tf.add_n(tf.get_collection('accuracy'), name='total_accuracy')} + + def save_features(self, name, features, num_rows_in_grid=4): + """ + Saves layer features in a grid to be used in tensorboard + :param name: Features name + :param features: A list of feature tensors + """ + for i in range(len(features)): + shape_in = features[i].get_shape().as_list() + channels = shape_in[3] 
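+            # The channel dimension is tiled into a grid of num_rows_in_grid rows by
+            # channels / num_rows_in_grid columns (so channels must be divisible by num_rows_in_grid).
+            # For example, a [batch, 8, 8, 64] feature map with num_rows_in_grid=4 becomes a 4 x 16 grid,
+            # i.e. a single-channel image of shape [batch, 32, 128, 1] that tf.summary.image can display.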
+ y_channels = num_rows_in_grid + x_channels = int(channels / y_channels) + + activations_features = tf.reshape(features[i], shape=(shape_in[0], shape_in[1], shape_in[2], + y_channels, x_channels)) + + activations_features = tf.unstack(activations_features, axis=4) + activations_features = tf.concat(activations_features, axis=2) + activations_features = tf.unstack(activations_features, axis=3) + activations_features = tf.concat(activations_features, axis=1) + activations_features = tf.expand_dims(activations_features, axis=3) + tf.summary.image('{}_{}'.format(name, i), activations_features) + + def rotate_image(self, image): + """ + Rotates a single image + :param image: An image to rotate + :return: A rotated or a non rotated image depending on the result of the flip + """ + no_rotation_flip = tf.unstack( + tf.random_uniform([1], minval=1, maxval=100, dtype=tf.int32, seed=None, + name=None)) # get a random number between 1 and 100 + flip_boolean = tf.less_equal(no_rotation_flip[0], 50) + # if that number is less than or equal to 50 then set to true + random_variable = tf.unstack(tf.random_uniform([1], minval=1, maxval=3, dtype=tf.int32, seed=None, name=None)) + # get a random variable between 1 and 3 for how many degrees the rotation will be i.e. k=1 means 1*90, + # k=2 2*90 etc. + image = tf.cond(flip_boolean, lambda: tf.image.rot90(image, k=random_variable[0]), + lambda: image) # if flip_boolean is true the rotate if not then do not rotate + return image + + def rotate_batch(self, batch_images): + """ + Rotate a batch of images + :param batch_images: A batch of images + :return: A rotated batch of images (some images will not be rotated if their rotation flip ends up False) + """ + shapes = map(int, list(batch_images.get_shape())) + if len(list(batch_images.get_shape())) < 4: + return batch_images + batch_size, x, y, c = shapes + with tf.name_scope('augment'): + batch_images_unpacked = tf.unstack(batch_images) + new_images = [] + for image in batch_images_unpacked: + new_images.append(self.rotate_image(image)) + new_images = tf.stack(new_images) + new_images = tf.reshape(new_images, (batch_size, x, y, c)) + return new_images + + def data_augment_batch(self, batch_images): + """ + Augments data with a variety of augmentations, in the current state only does rotations. + :param batch_images: A batch of images to augment + :return: Augmented data + """ + batch_images = tf.cond(self.augment_rotate, lambda: self.rotate_batch(batch_images), lambda: batch_images) + return batch_images + + def train(self, losses, learning_rate=1e-3, beta1=0.9): + """ + Args: + losses dict. + Returns: + train op. 
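+            Note: the Adam minimize op is wrapped in tf.control_dependencies(update_ops) so that, when
+            batch normalization is used, its moving-average update ops are run on every training step.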
+ """ + c_opt = tf.train.AdamOptimizer(beta1=beta1, learning_rate=learning_rate) + update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # Needed for correct batch norm usage + with tf.control_dependencies(update_ops): + c_error_opt_op = c_opt.minimize(losses["crossentropy_losses"], var_list=self.c.variables, + colocate_gradients_with_ops=True) + + return c_error_opt_op + + def init_train(self): + """ + Builds graph ops and returns them + :return: Summary, losses and training ops + """ + losses_ops = self.loss() + c_error_opt_op = self.train(losses_ops) + summary_op = tf.summary.merge_all() + return summary_op, losses_ops, c_error_opt_op diff --git a/notebooks/.ipynb_checkpoints/Introduction_to_tensorflow-checkpoint.ipynb b/notebooks/.ipynb_checkpoints/Introduction_to_tensorflow-checkpoint.ipynb new file mode 100644 index 0000000..8d25d8c --- /dev/null +++ b/notebooks/.ipynb_checkpoints/Introduction_to_tensorflow-checkpoint.ipynb @@ -0,0 +1,557 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Introduction to TensorFlow\n", + "\n", + "## Computation graphs\n", + "\n", + "In the first semester we used the NumPy-based `mlp` Python package to illustrate the concepts involved in automatically propagating gradients through multiple-layer neural network models. We also looked at how to use these calculated derivatives to do gradient-descent based training of models in supervised learning tasks such as classification and regression.\n", + "\n", + "A key theme in the first semester's work was the idea of defining models in a modular fashion. There we considered models composed of a sequence of *layer* modules, the output of each of which fed into the input of the next in the sequence and each applying a transformation to map inputs to outputs. By defining a standard interface to layer objects with each defining a `fprop` method to *forward propagate* inputs to outputs, and a `bprop` method to *back propagate* gradients with respect to the output of the layer to gradients with respect to the input of the layer, the layer modules could be composed together arbitarily and activations and gradients forward and back propagated through the whole stack respectively.\n", + "\n", + "
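+    "\n",
+    "As a rough, self-contained sketch of that interface (not the actual `mlp` package classes), a single affine layer and a small stack of such layers might look like this:\n",
+    "\n",
+    "```python\n",
+    "import numpy as np\n",
+    "\n",
+    "class AffineLayer(object):\n",
+    "    # y = x W + b, exposing the fprop / bprop interface described above\n",
+    "    def __init__(self, input_dim, output_dim, rng):\n",
+    "        self.weights = rng.normal(scale=0.1, size=(input_dim, output_dim))\n",
+    "        self.biases = np.zeros(output_dim)\n",
+    "\n",
+    "    def fprop(self, inputs):\n",
+    "        return inputs.dot(self.weights) + self.biases\n",
+    "\n",
+    "    def bprop(self, inputs, outputs, grads_wrt_outputs):\n",
+    "        # gradients with respect to the layer inputs\n",
+    "        return grads_wrt_outputs.dot(self.weights.T)\n",
+    "\n",
+    "rng = np.random.RandomState(0)\n",
+    "layers = [AffineLayer(4, 8, rng), AffineLayer(8, 2, rng)]\n",
+    "\n",
+    "# forward propagate a batch of 3 inputs, keeping all intermediate activations\n",
+    "activations = [np.ones((3, 4))]\n",
+    "for layer in layers:\n",
+    "    activations.append(layer.fprop(activations[-1]))\n",
+    "\n",
+    "# back propagate gradients of a (dummy) scalar function of the outputs\n",
+    "grads = np.ones_like(activations[-1])\n",
+    "for i, layer in enumerate(reversed(layers)):\n",
+    "    grads = layer.bprop(activations[-2 - i], activations[-1 - i], grads)\n",
+    "```\n",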