diff --git a/Round_6_-_Dimensionality_Reduction.ipynb b/Round_6_-_Dimensionality_Reduction.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..9bc24489d9fda67e7179ce3a4ed3cf034b756864 --- /dev/null +++ b/Round_6_-_Dimensionality_Reduction.ipynb @@ -0,0 +1,1823 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "8a7407672fbc81d46b89f4036e0480ef", + "grade": false, + "grade_id": "cell-50a11b583482297d", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "# Machine Learning with Python - Dimensionality Reduction\n", + "\n", + "Machine learning applications often involve high-dimensional data. Consider a $1000 \\times 1000$ pixel image. If we stack the intensities of pixels into a vector, we obtain a vector of length $10^6$. In this round, we study dimensionality reduction methods that transform long feature vectors to shorter feature vectors which retain most of the relevant information contained in the raw long vectors. Such dimensionality reduction is useful for at least three reasons: \n", + "\n", + "- Shorter feature vectors typically imply **less computations** required by ML methods. \n", + "- Shorter (but informative) feature vectors ensure **good generalization** of ML methods from training data to new data points. Using very long feature vectors bears the risk of overfitting the training data. \n", + "- Transforming long raw vectors (e.g. obtained from pixel intensities of a snahpshot obtained from a megapixel camera) to vectors of length 2 (or 3) allows to **visualize** data points in a scatter plot.\n", + "\n", + "In this notebook, we specifically focus on dimensionality reduction by principal component analysis (PCA). In PCA, the data is linearly transformed into a lower dimensional representation that results in the minimal amount of information loss over the entire dataset." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "841960154221606c5b3503c75c1b59ab", + "grade": false, + "grade_id": "cell-23dd358e94895c83", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "## Learning goals\n", + "\n", + "After this round, you should \n", + "\n", + "- understand the basic idea behind dimensionality reduction. \n", + "- be able to implement PCA using the Python library `scikit-learn`. \n", + "- understand the trade-off between amount of dimensionality reduction and information loss. \n", + "- be able to combine PCA with a supervised ML method such as linear regression. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "fb9546471d5d2dc19a9169338d53716b", + "grade": false, + "grade_id": "cell-9c8204a98447db2b", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "## Additional Material \n", + "* [Video](https://www.youtube.com/watch?v=FgakZw6K1QQ) by StatQuest on PCA\n", + "* [Video lecture](https://www.youtube.com/watch?v=Zbr5hyJNGCs) of Prof. Andrew Ng on dimensionality reduction \n", + "* [Video lecture](https://www.youtube.com/watch?v=cnCzY5M3txk) of Prof. Andrew Ng on dim. red. for data visualization \n", + "* [Video lecture](https://www.youtube.com/watch?v=T-B8muDvzu0) of Prof. Andrew Ng on principal component analysis\n", + "* https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues\n", + "* Chapter 9 of the [Course Book](https://arxiv.org/abs/1805.05052) - Dimensionality Reduction." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "740eb7f88b7bbdc90aacacd3e8f8db54", + "grade": false, + "grade_id": "cell-b119cb801fe9fd35", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "## Bananas and Apples\n", + "\n", + "<a id=\"data\"></a>\n", + "Consider a dataset containing $m=30$ image files of apples and bananas: \n", + "\n", + "* 15 images of apples stored in the files named `1.jpg` to `15.jpg`\n", + "* 15 images of bananas stored in the files named `16.jpg` to `30.jpg`\n", + "\n", + "The files contain color images, but for our purposes we convert them to grayscale images. We can represent each pixel of a grayscale image by a number between 0 (black pixel) and 255 (white pixel). The size of each images is $50\\times50$ pixels. Thus, we can represent each fruit image by the \"raw\" feature vector $\\mathbf{z} =\\big(z_{1},\\ldots,z_{J}\\big)^{T} \\in \\mathbb{R}^{2500}$. The $j$th entry $z_{j}$ is the grayscale level of the $j$th pixel." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "c2c0a22e80127c102c3fc3ad44892c9e", + "grade": false, + "grade_id": "cell-4da09c247a436939", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='handsondata'></a>\n", + "<div class=\" alert alert-info\">\n", + " <b>Demo.</b> Loading the Data. \n", + " \n", + " \n", + "The following code block loads the images, converts them into grayscale images and stores them in the matrix $\\mathbf{Z} \\in \\mathbb{R}^{30 \\times 2500}$ whose $i$th row $\\mathbf{z}^{(i)} \\in \\mathbb{R}^{2500}$ contains the grayscale intensities for the $i$th image. The first three apple images and the first three banana images are displayed.\n", + "\n", + " </div>" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "85ef4be0d441044ab9e1ce9bce9bf15a", + "grade": false, + "grade_id": "importPIL", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "outputs": [], + "source": [ + "# Import required libraries (packages) for this exercise\n", + "from PIL import Image\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "\n", + "plt.style.use('ggplot') # Change style for nice plots :)" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "a02ae9b0c51a42e28bc777621fb62a74", + "grade": false, + "grade_id": "cell-5cfd1c8afd6117c0", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "The shape of the datamatrix Z is (30, 2500)\n" + ] + }, + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAdoAAAItCAYAAACem7uTAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nOy9ebBmVXX+v/06IENDz01PdDfd0MxzdyNKZFJUJEENVhk0pgrNZKxEK0lVtJKYVCVWxZTGpLASEzWCOKEQI0EGQRRkUgahB5qmobuh56YnJokaf3/8ylPP+tz7rvOee+/ufhuez197333ec/aZ9r5nPXut9bJf/vKXvyzGGGOMqcL/29sdMMYYY17MeKI1xhhjKuKJ1hhjjKmIJ1pjjDGmIp5ojTHGmIq8Ym93YBDgwuuXvexl1Y8zlsd45plnmvJTTz0V2h566KGebZs2bQr1V73qVU159uzZoW3evHmhPnfu3KY8YcKE0Pb//p//fzPmpcTPfvazUOcYoONd27Yvf/nLm/LSpUtD29FHH92zD7/4xS9C/ec//3mo6/j2yle+sue27N9+++0X6v/3f//Xsw/c76/wiGiMMcZUxBOtMcYYUxGbjku7GXekJt9aJumvfe1roX7DDTc05Y0bN4a2n/70pz2PP27cuFD/3//936ZM8wl/O2vWrKZ88sknh7YPfvCD6W+NMS8uXnjhhVA/4IADQl1Ns88//3xoe8Ur4jR0/fXXN+VHH300tE2ZMqVnXU3OpeQSFs3KOkapiXk4RjKe+YvWGGOMqYgnWmOMMaYiNh33gZoKRmMO7mKCfvzxx5vyJz/5ydC2bNmyUFcTCVfE6THZ9vTTT4e6rq4bP358aKN5R1c6H3PMMUNPwBjzkmH//fcPdUpPao7dvXt3aPunf/qnUL/pppua8rRp00Lb2WefHerq8UBTMccsHf+4Qlm35djMc+m1sjjDX7TGGGNMRTzRGmOMMRXxRGuMMcZUJNVoxypV7WjcO7QPbf3R44ylG06XPmRkfbr88stD/Rvf+EZT5lJ0ahGqPVCHVX1h0qRJoe3ggw8O9V27djXl5557LrQdeOCBoX7mmWc25fPOOy+08RoNWspjuxsZM7ZwTFJXwVJyjfaLX/xiqB900EFNmWtFVqxYEerZ+pAs4hTJxoSxGL/8RWuMMcZUxBOtMcYYUxFPtMYYY0xFUo12ELSskequIz3GnuqDhhkrpZSvf/3roa4hzZg9IgsRRt1VNY4dO3aEtu3bt4e6ahjcD7P3vP/972/Keyr7kTFmMOE6EtVZSyll586dTZnrSI477rhQ17CKU6dODW1PPPFEqGexCXgc9ZWlXqvbcj/0m3UIRmOMMWbA8ERrjDHGVGSfCsE4CCbK0fRh+fLlTfnTn/50aKM5WE0vDCXGsGRqtuHSeU32zuXudNnRc6Pp+KMf/WioM+SaMealC7P3cMzSOseW+fPnh/qdd97ZlM8444zQpqFfWacrEMdmdTnKTMcc43kuI8FftMYYY0xFPNEaY4wxFfFEa4wxxlRkn9Joa2myDz/8cKh/97vfDfU/+qM/GpM+aJjFcePGhTbqparRMnTitm3bQl21V7oC6dJ06hI//elPQ33u3LlN+SMf+UjP/rQxCFq6MWbPwXGHqeV0/KDbjY47pZRy9dVXN+VVq1aFNo6TDz74YFN+zWteE9pe/epXh7qGlWWbjqEMH0mNNgvJ2Gus8xetMcYYUxFPtMYYY0xFBs50vDfMjn/5l38Z6g888ECo/+IXv2jKf/zHf9z3fjUaSinRPMFoKHTvUTcdzapTytBIJVnEEzWD0C3ooosuCvVjjz225zG6YFOxMS8tOO7QdKzmYm77nve8J9RXr17dlBlBb+HChaE+efLkpswxlfKczi1ZJh/OQTR1OzKUMcYYM2B4ojXGGGMq4onWGGOMqcjAabS0f2fZGbroudm2V111VWi7/fbbQ/2f//mfm/K5554b2qgLKAxLdtRRRzVlhiF79NFHQ131BWoN3K/qu8x2oeHNXve614W20eiwI2U0GvxYPQvGmLGF79/zzz8f6occckhTpivQYYcdFuqf+MQnmvK1114b2j71qU+FurooqvtOKUPDNR5wwAE9+6vbcrwl1Gz7wV+0xhhjTEU80RpjjDEV8URrjDHGVGTgNFqSaW1ddLgu21LL1PBhf/Znfxbavva1rzVl6q7r1q0L9dNPP70pb9myJbSphlFK9H+dMGFCaJsxY0bP3zJEmfruZqHDapJpq13Q31InYQpAY8yeg+91pnNqXIJShmqr6ht71llnhbYbb7wx1HWMffbZZ0MbteCRjh+ahrRt2154dDLGGGMq4onWGGOMqcjAm467MFYmSnLSSScNWy6llN/5nd9pyueff35oU1NxKUOz5WSoCZhL5Tdv3hzqGzZsaMrMSjF+/PimTPM0zczMUpHRxQyt92Ks3HDsvmPM4MD3muZVNb/SpEuTr+5r1qxZoe2YY44J9SeeeKIpa9jaUkp5+umnQ13HQo4f6upYw3XQX7TGGGNMRTzRGmOMMRXxRGuMMcZU5GW/3Fs+H33SRXftEqJPQ24x/CHrK1asaMpPPvlkaLv55pubsroBDdcHhk5U1J2nlLgEnvvhcnNNSUWd9eijj27KGgKylFJmz57dc9slS5aENoZ2NMaYX8Hxi+FddX0K15FQo9XfrlmzJrR98YtfDHUdlxYsWBDaDj300FDXdo7VGRx/s3mol+uPv2iNMcaYiniiNcYYYyriidYYY4ypSOo4OVby7Vj5PLalJ1INdOvWraFt586doa66Jrd96KGHQl3T5qnPKvu0cePG0DZp0qSefX35y18e6rTt67loX4f7rR7nzDPPDG2qSzBEJHWVpUuXDlsuZahGe8EFFzTladOmhbbMp26s0uQR+9Uas/fo8v4xBCP1XF0/c/fdd4e27du3h7rGNeB+ue22bdua8v777x/adIxqW98zEvxFa4wxxlTEE60xxhhTkYF371FoGnjggQdC/d57723KXBbOMIYTJ05sygsXLgxtNIWqKfnKK68MbZs2bWrKNFXoMUqJ4cQY/pCmDG2n2VazW5QSQzLSfK11hoDs4ibEY6rJ5rzzzgttrHcJ7Zhh07HZ19iHhtdRwfOkLKWmWY4HDJV40003NeU77rgjtB1++OGhrm46lOqY9UxD0HJbHatpyuY4yXbF7j3GGGPMXsATrTHGGFMRT7TGGGNMRQYuTV5m69dwh6WU8t3vfjfUVUvVpdzcTylRE6XbC0OEqUZK/XHZsmVNecuWLaFtypQpPevsD3+rYR+fe+650Ma6atd0/VHdlVoDr7VqD9SMGSbtqaeeasorV64Mbay/973vbcqaqqormQ47Vun3jDGjh66YqsvyXX3sscdC/dZbbx32d6XEMLGllPL444/37MO4ceNC/VWvelVT5podHVO5foaMRHf3F60xxhhTEU+0xhhjTEU80RpjjDEVGTg/WoYbVB32f/7nf0Ibbfs7duxoyvTNyjS8tjR0qjdQW9X9Uh+lTqFhFVUvYFspQ3XYXsdknX5cWd+pf6g2QX2D6G8POOCA0EZt+rTTTmvKf/EXfxHa2o5jzL7MgA2v1WhLk6fj0BNPPBHa7rrrrlD/3ve+15SPOeaY0EZ/fh3vqK3OnTs31NWvNtOQmUKvLf2pwjmg6WfPXxhjjDFm1HiiNcYYYyqy1917aFpZvnx5qN9yyy1NmWEV6XKSmQZo3tRsPjR78PNf2+kKpPuh2ZtmXN0vzRyZSZqhJ7tkl9A+ZNmESJuZWdvbsir9+Mc/bspf//rXQ9ull17a83ddXHbs3mMGkS6m4y7Pd5f98P3MsmmxrjIa96v1XqEHh9svQ9X+8Ic/DHUNictxh2Fk6Yqp7N69O9RVruP4m4WJpZRI2a8f/EVrjDHGVMQTrTHGGFMRT7TGGGNMRcZMox2pRrZ169ZQv+aaa0J96dKlTZkuL9QctQ9MLUfUJk8NlJrGM88805QzjYBt3K/qGLrPtm3Zn+za7rfffqGu50ntmcfU60e9mdtqiEaeC/Ua1WQ0BVYp0fWnlFJOPPHEpmxN1ryU6PJMZ+s02vRcHU/4rmb1bC1G2zoNTeW5atWq0EZXoKz/1GT1t9wPdVcd/3iembvnWIwt/qI1xhhjKuKJ1hhjjKmIJ1pjjDGmImOm0XbR01Tvu+eee0Lbk08+Geqq/9E+z5RrmS8X/UI15CH1BfpN6XF37doV2lSX5X6oc+o14rbUT1UDpfZL7UF12UyHZX9IF99d7T/7wxRUytq1a0P9+uuvD/Vjjz22536z/hgziIxGd+2y337bRrMt+8cxItuvhl1ct25daJs4cWKo6zhOn1XWdWxu2zabHzKNts1HuB/8RWuMMcZUxBOtMcYYU5E9EoKRn+KaZYdhFdVkWkr8bJ8+fXpoO+SQQ0JdTRncliZfJTNtl1LKtm3bhu0P95uFLyslmm7bstbotnSBoguPmk+yzEPZMQjPM7tGbaYVNdPTRevee+8Ndb3Whx56aGjL3BKM2RcYTVjRkbZRTsokIpK9Z7ofngfH2/Xr1zdlyngzZswI9S7jpI6FmWtjKf2fS5s5fSQZmTxaGWOMMRXxRGuMMcZUxBOtMcYYU5G9kiZPl3ozZZLa8kuJ+gJT3VGjzdI2TZs2LdQ1XBd1TNUUSynloIMOasrUGPW3bGNIMNVPVYtk30uJ2knm5sL9UqNVPYFaDbfVPmSpAkmbzqPH4X42bNgQ6urudeGFF/bsH3FIRrMvkumcY7HP4dCxke9u9i5naT/p0vfwww+Huq4zYYo6umk+/fTTPdt4HF3TQ3cejmHZ9dXz5nwwFmOLv2iNMcaYiniiNcYYYyqS2iT5yTxWZg41I9CEStOAuq7QFMs+qBmB23IJuZpjeUyaT9ScPXny5NCm5pQsalQp+bJ61nWJu5qu2XduO1YZQOj6QzNMth+amDLzOqN93XnnnU3513/913v2bzRL8G1WNnuK0bjzZJHkSLbtaMYErXM/ehzNzlNKKY899lio61hICZD71fGN4wPHITUX070nMyVn59KG3XuMMcaYAcMTrTHGGFMRT7TGGGNMRTq596j21sXurxl4SonuPdTsqFWqa8jOnTtDG8Msqo1+5syZoY2uQPpbLjenvvvQQw81ZeqPu3fvbsoMH8lze/bZZ5syNYFMd20j01Eyt5wsI0+mz5cS+8tz4X51W7ZRx161alXP/mZrBKy7mkGk7bns97ntsp9s7UUp+XuehSnku6rj2YoVK0Ibx2pdZ0LtlGO+arhsY0jGLHsPx/Fs/soYi7HFX7TGGGNMRTzRGmOMMRXxRGuMMcZUJNVoM1t/mw6g7U899VRoU99ZhjtkXfdDvZGagWqt9HdlCMYJEyY0ZfqpagiwUkqZNWtWU964cWNoU72DfWd/dVvqB/TlzfziGDqx1+94HPaPZH6q/aaYavstdWyei4bk3LJlS2ibOnVqzz4YM4jsqbUDXWIcqO7ZttZGt+WaE42HQL9Z7ufggw9uyhzPOBbqGMG1KvSr1fUf3A/XvWid446OUW3rP+xHa4wxxgwYnmiNMcaYinRy7xmpGYRLtNUEQdMAP+nVzKDLyUsZappds2ZNU6Y5mOi5tLkYqTlz06ZNoU2XsWfL30vpZsbNTLU0ieg1o1kmCy2WmXjZxvBmeo3aTC26X14jnovul25hNh2bfQ2OUXRBycZUfa/b3jGtcwzlMbPsPXzvdXxmZrUHHnhg2H2WUsqkSZNCXcMf8pg8FzUPM1wjj6PXl2MUyVwd9TiUt8bCldBftMYYY0xFPNEaY4wxFfFEa4wxxlSkU5q8kWq0tKurhkC7Oe3j6hpELXX16tWhru49N998c2j7yU9+EupTpkxpygsXLkz7oEvXly5d2rN/1B+7uOVQq9Tf8rpnKet4TD2XTKMoJerG2T1jf9nG66fHzVLolRJ17SwM5Vg9m8bUhC4nRJ/jtvR2ShY+lZpstgaFa1k4hi1btqwpP/LII6FN32uGRqReqmMWxwCGwNXfZmtVSonXN1vvwf5mKfR4TO6Hx+kHf9EaY4wxFfFEa4wxxlTEE60xxhhTkTELwZhpZNTsVE+gDpf5WWpKOvanlKGhHpUdO3aEuvrD0j8sCzXGsIDq65mF9Solv0aZFpGFRWvbT+avm+mcbf5+epzMt420+fjpuWV+cV1CgBozKGTPabYWo+151v1yfGDYQt2W4yLXvTz66KNNmWOz7pdjPNeRZOFnOQZk8wPHmvHjxzdlXiOONXpduB/tE9vYB/vRGmOMMQOGJ1pjjDGmInskBGMW0opmDpoRFJpdaMpQMwKhKUP3RZM0UdMB+5eZHLqEP6R5h9l8lMwknYVgbMs6kbkN8T7ptuw7Tcfa38xMxHpbSDXFpmLTxkiyroyWzC2OZHJcW4YsfT9p6uR7pHLSypUrQ5uaisnEiRNDXc8lk7NKiWNAm3tMF2lMxyGeN+s69tClSNt4H0bizkP8RWuMMcZUxBOtMcYYUxFPtMYYY0xFRm987gOG+VL7eJsOp/Zx7oc2+Cy02CGHHBLqqnEwFRO1X61nmiLdgqhHZm4vPGYWWixzbaGGofopj0GdJ9NcMp0i06JLydMDsr96LzRMJrE7j9kXaHt3s/UVmVbJbXUs5JqJJ554ItRXrVrVlDUF6HD9zVJnqj7K9R104dHz5Las67acH6it6nkzrSbHloMPPrgpc37Ixo9s7Uq/+IvWGGOMqYgnWmOMMaYinmiNMcaYiuwRjTYLm0Udkz63SpvfrGoItKvTn001jkyTLSXqANQIVJft4iNKMm2E1y/TJ3neqvNkPsqlxPPkMbP+ZWEzWafuxPty7LHH9tyPYk3W7Itk/rBd0uRlY4tqsMPVN2zY0JSppXJ8y/qgYwD7Q2012zbzW20LY6u6LNfscO2NnlsW8pZjFLdtS903HP6iNcYYYyriidYYY4ypyIhNx13CmfFzf86cOU35wQcfDG10gdHPeJo1uK2aQTLzbynx8//ZZ59N+6suKdxPRpbxhuaTLmEViZpfsxBlvGeZCaftmEqbaUWvH03FZMmSJT3bsuxRxrSxN0IwdjEHZ3A827ZtW6irmw7HVKJmXb5HHFPVjEtZSt9zmltHM/ZlkhHHar0udNmZNGlSqPebDYz9aTu3ftr8RWuMMcZUxBOtMcYYUxFPtMYYY0xFUo02cyMZjUY2f/78pkx7ONPDqX2cGgFt/Wq/b0t1pMdpS/GkuiK1Ev1tWwg1bWd/qF1qGDWGdsz0U7Zl4cMyvbTNPSojS8tF6HJ0wgkn9Nw2SxtmzdYMIm2arL6vHAv1naMmu2zZslBXF55x48alx8zSxREd77LwqW1hFTN3RboY6fhBzZh90DGW5816v245HEu6pDrshb9ojTHGmIp4ojXGGGMqkn4D1zLHHXrooU2Z0Z127NgR6moa4NLuzORL8yVNEFlkKGa/UHMF23bu3NlzP5nZiOYI1p9++umev+V90XOl+5H2ib+jmVlNLbzWWdQrmmTYBzUj8ZhTp04N9QULFpR+sKl436aLm0smW3TJKsVj6vjBNpozFY4lfBaz3xJ9V9avXx/aHn300aa8adOm0MaxT7OTdcmOw3Phe64ZbzKT6sSJE0Pb7t27Sy845vN6qVSWyU6llDJjxoymTPcenos+K9nzx/PkmD+SscdftMYYY0xFPNEaY4wxFfFEa4wxxlRkxO49o0H1hJkzZ4a2devWhbra2WmvZ/9Ub6D20CVbA/VJ1Xoy7bQLbe4yesw2bVU1jWxbajXUMDQTBrMoZRl62s5Frz3vy5lnnhnqDJtmXpzwOdBnk89wpjFyDFDNk8dgNpcsm1WW7SvLZMU+8b3ZsmVLqK9evbopa1adUoaOQwp1zSwLUObix22zULW8Xqpdbt26NbRx/FD9lMfctWtXqOu5cD90R1JtmNtm12hP4y9aY4wxpiKeaI0xxpiKeKI1xhhjKrJH/GhpG1fN4NRTTw1tP/7xj0NdNVFqIfRv0rCKPCZ/qzoBbfmZjyv3q/XMT69tPzyXLJwkf6v75baqaVB/oYal+1W9tpT8+rWl1MvCaL73ve9Nf9sLh2Dct8n8svneZBpt5sPK95rPafYMd9E5uW5Dddgnn3wytNEfVuuZPz99RLP3kdeP15papsLf6nvG8UPrfB8z/TtLdVdKXMNDn9vJkyeHuvr+Z/eIfeziRzsW+IvWGGOMqYgnWmOMMaYindIQ6Od2mzmz30w/r3nNa0L9mmuuCXU1M2i4w1KGmjl02X2WZYf9pamYJtUsk45uy2PQ1JKZK+i6pH2iGSsziTH0pPahS0g6mqZY123bTO9q0nn7298e2o488sgyEmwqfvHSdm/12cvMzHxmszGrzc1FzcNPPfVUaKNL4saNG5syTcfsg/aRJl2+y736TtpCQGbjeJesO7ofDdVYytAxQKUo3jO67Kh5mCFaMxN6m/k/IzM7j8VY4y9aY4wxpiKeaI0xxpiKeKI1xhhjKtI9VXwPxkozu+CCC0L9s5/9bFOm9pCFZKTGQnu92vbbQqqpDpudJ3UJbqt19i9L+UfXH+rPel1Go1Mo7HvWP0L9SJfgX3rppelxzEsDdcUrJT4HWeq7UvrXVvke0+VE+8D1H9Rh165d25Qff/zx0Eb3HtX72lxt9N3lu6D947lwLNQxInOtKSWuM+H147XXd5easZ4nrwHXsijTpk0L9dmzZ4e66rLUb7NQum0hFrN1Q1lYzy6uXr3wF60xxhhTEU+0xhhjTEU80RpjjDEV6aTRZnbskWpt/N2SJUtC/ZZbbmnK999/f2ijrVy1y8yWz3aGBKM/m+o+1FYzMr9V7ifzjc00We6L+kzmG5tpym16h2o5befyxje+sSkvXLgw3a+SPWMOwbhvk+lcWejSUuKzl4Vy5HPJVG6PPPJIU3744YdD2/bt20NdNUeOF3zeVdfk+gqShYHU95XH4Hln/qSZTzw1Wd4XvRcMy6rXd8eOHaGN5z1jxoymPG/evNB26KGHhrpqwW3vdb+6aynxvLN0ojXS6fmL1hhjjKmIJ1pjjDGmIqnpeKRhFdv202ufpQw1215yySVNmaHOaFLNlsPT/KSmAob12rVrV6jrvhieMQv5RhOOnmtb2MLM3YF90OPQ5Uk54IADeh6DdR4jC+vG/SxYsCDUP/axj/XclmTh4bJjmn0LPk+ZrEJTst57tm3btq0pb9iwIbTde++9oa7jyfr163seo5QYYpBjVGYe5ntNMjOumod5vTL3O14/uuVkptksjCzHW60zqw7NwXPnzu25Lfun16QtI1rmCpaNEVkWtszkPFL8RWuMMcZUxBOtMcYYUxFPtMYYY0xFXvbLDmuZ93RWenLDDTeE+mc+85lQ1yX51BgZEky1k7Yl7qpFZNeAegz1BO0T7f6ZLpWlqCsl13NV22kLS6m/5blkOsqUKVNC25VXXhnqmgrP2qopZWgIRn32+IzwWdR3kC47d955Z1O+6667QhvdA1V3nThxYmjjeoZMS+W2qtm26XvZeWcp6rIxi+NXFz2cY4K+9xwDtA8nnHBCaFN3nlKGroPJjtnlWdBzbQs3q/vNNNq29IoZvbb1F60xxhhTEU+0xhhjTEU80RpjjDEVSf1oR5pijYxGl1Pb+bnnnhvali5dGurf+c53mnJbGDL1UaPvKfXdLHSX6rdZyi62Z8coJU8FloWky8LXUbvhuWT7ya7fn//5n4c21WTZ/7Ywadmzku2nS0hG68R7n2wNAO+P+saWEt+5m266KbT99V//dVM+9thjQxt1WH2Gs5B8bX1naEIN0cg0b0TPm2EV9ZnmGMVttU5/18zXtG0cUh2b7/WsWbOaMtdpdPFF5bik5833mttmOnZG1r9Mv+16nOY3nX9hjDHGmL7xRGuMMcZUJDUdj0XoqbGEJpAPfvCDoa7mzuuuuy60cYm7bsv90lSgrkGZGabNxKtmjyzrCPuQuTeUEs0g3I8es82TS03dbWbmT3ziE035oosuSvebZWrpQvY82hy8b0GXmEwqoWvev/7rvzblz3/+86HtrW99a1OmC1FmJs1cYErJXfwylzrCPmgfOSZkbkLcj44RbdmP9F1hOESGT9VMO1OnTg1tavZuc4npkhlMGY0kNNIsPDXGksGaSY0xxpgXGZ5ojTHGmIp4ojXGGGMq0ikE496g37RppUSN8VOf+lRoU9efUkrZuXNnU6aG0ZbaSlHtJgu5WEru3kO4LyVzBeqiL2Rh3A455JDQ9vGPfzzU3/GOd/Q8Zhddxbw04fO9ZcuWnm2XX355qOv6AIb+Uz2Xmifde7JQfxkMRcjwgqo/Uydmn9T9h65A+p7TZYe6tY591F153uqWM2fOnNA2ffr0UNdxoC1VX9bWZYzqEup3b7jxjWS//qI1xhhjKuKJ1hhjjKmIJ1pjjDGmIqkf7d5gNPqeaggf+tCHQht1i6uvvropb968ObRRD8k0g8yvNvNfI1kbQzlmYci6pNDjfjXc2le/+tXQduqpp4b63k6ZaPZt+Jyqj+att94a2r72ta+Fuj7vfMfGjx/flKljZv6lWTjSUqIum/nkl1LK008/3ZT1nRpuv/oObtq0KbTpOERtlyEPVYdV39dSSpk5c2aoH3744U2ZazH4Xus1ysahNl/5sUg113U/g4S/aI0xxpiKeKI1xhhjKjLw7j1dyMK48TSvueaapvztb387tK1atSrU1QRFtxw1Ke3atSu0cWm/1tm/LPMPzU1079FsIZkJR81qpZSyePHiUL/sssuaMk3tWbaLtv7pb9tCtY0UuxTtW2RuLw8++GBou/HGG0Nd39277747tDFjjzJhwoRQ1xCHdK2hy44+43zWaFLNXFnoGqTnzXdX+3DYYYeFtuOPPz7U1U2H55lJTTSDZ+NHNmaNJszqaEL91pi+2sYOu/cYY4wxA4YnWmOMMaYinmiNMcaYiuzTGm2XrtMNQJfOL1u2LLTdcMMNob569eqmrKEbS4la0/bt20Mb67otNSp1CSgl6hZMKTZt2rRQV62Eesf8+fOb8qJFi0Lba17zmlBXl4Fx48aFtv333z/UVVuizqPaF9vb9FzFOtTobX0AACAASURBVOuLF+qaL7zwQlOmKwvdZ6688sqmfNttt4W2W265pSnzOdQwj6XEZ5zrF/hM63PK/TLEoWqk1Hq59kHDIc6ePbtnG1PU0S1H3yu+U9n6D0INuV94jD3lopOl3xupC6I1WmOMMWYfwxOtMcYYU5F92nRMslNhm7rsMIJMlnUncxvi7+gm9OijjzZlmlC5fF/NuDTb0pSsZi62ZaYgLslXk0jmElBKvrQ/y7DRxb3HpuMXL1mGLEZm47ZqNl25cmVo03ds6dKloe2RRx4JdX1v+Ayzrttm2XBKiWZemoPpRqR1mqS1zv5kLkZ8x7KsO22Zt/Q9z0yzbS46XVx4umT6qWE6bmMk7kj+ojXGGGMq4onWGGOMqYgnWmOMMaYiA6fR7qlQenqcTJdge6YxZtpkl/6M5bbZ7zJ9Jgu5OJo+tGFd9qXBjh07Ql2z3NDVjS4yqkdmof94DL7X6lKk5VLy0ImZq1sp8V3he8I1FLotj6lrPvj+ta2LyBirsSZr6zJO8r5kZGNsF422bb8Z1miNMcaYAcMTrTHGGFMRT7TGGGNMRQZOo81o6+pLRd/romNnWvRYacqDeN0HvX8vdajLqR5JzbOLP6RqrfQh7xImcKxoO8ZIh99+3/mu246ULrrlWF73vTF9OQSjMcYYM2B4ojXGGGMq8oqscTTLo19Mprsa5p3RkJmGupjZxqp/ozHfjJW5eh9SQIwxLzH8RWuMMcZUxBOtMcYYUxFPtMYYY0xFUvceNv3iF78Y0UEYLiwLuZVpb+xPFn6tLayiLkfneWX75X6ybZk2j24LGS8mzXFPuBd0YSQh1MzYMpoUZoO+/mOsQv+N9BoN4thR4zxH+9te+6nxTHnEMcYYYyriidYYY4ypiCdaY4wxpiKpRkvtUvXJtjRN+tuf//znoU1Do7GN6aoOPPDAXt0L6bLIK1/5yp5tPO4rXhHdidkH7S8vl/aBx+wSHo4Mos4yUgZNT7JGu29RK91ZLbr4ew+CL7syVun2avm5Z/3L1uF06UONe+YRxxhjjKmIJ1pjjDGmIp3ce9RdhWbSzO2FbbrfzD1muD70C03S7IP2n9tmbjiZeYL7oUm6yxJym47rYdPx3qfWfc/CkY7VczgI7kejuX5d+lfj3R1Ls3KX/vV73jYdG2OMMfsYnmiNMcaYiniiNcYYYyqSpsnLQghmbi6lRB2Mmpi6uYwmdVuml7a59yjUZLnfrL+vfvWrmzI1Wl6j7PoZ81JiT7jhjOYYe+u3SqYpjkbnHCsXni7HyELpdkn72cVlJ7tm2e9quGD5i9YYY4ypiCdaY4wxpiKeaI0xxpiKpH601CrVVv3ss8+GNoZKzHxu9ZCrVq0KbWvWrAn1HTt2NOVnnnkmtO2///6hPm3atKY8a9as0DZ16tRQP+SQQ5pyF79K2uv1GvFS0o9W+5+FlhxuX/sy9qM1ZDQ62KCnyVNG48vbr6bI9tFosGN1PTONNptXuC3f1S7adLbtng4R6RHHGGOMqYgnWmOMMaYiqemY7j36Gd8ldOLy5ctD2xVXXDHsdsPtV11raHJgnaZl5aijjgr1U045pSmfddZZoY3mCr0OmSsQzyUzmbdh03E9bDo2bXQxSXbZz2hMnxk6DrWFn9UxrFY4yTbzcL+0XT99l9uOuTdlBo84xhhjTEU80RpjjDEV8URrjDHGVKRTmrxMj3z++edD/aGHHmrK1157bWhT7XLXrl2hjRqo1mmD/+lPfxrqv/jFL5pymz1edVe6Al188cWhPnfu3KZMl53MjYl9yLReYo223jEH3SXkpUBtd4rhjrE37nsWJraUkV8H/m7btm1NefPmzaGN573ffvs15YMPPji0jR8/PtT7DRvbpj2rbsy1P6xnaVQ15C37tydck0Z6HH/RGmOMMRXxRGuMMcZUJDUdMyOPmk3V7aaUUq666qpQV5PwY489FtrUnEJTAKM96bY0w7DrajrmEndt475ozqEJePbs2U35t37rt0LbhAkTevaPx+ySUcim43rHtOl47zNWbi21GE1Epz1xzCeeeCLUn3rqqabMqHMclzg2ZkyfPr0p61hHOFds3bo11DWSYNs42eXZOOigg5qyRgYsZejc0q+rkk3HxhhjzD6GJ1pjjDGmIp5ojTHGmIp0cu9R2/43vvGN0Hb33XeHehYaS/dD3ZIa7XPPPdezjajtnPulG5Ha76k3Z5qBLo0vpZR3vOMdTXnx4sWhjbqF/vbRRx8NbVzGPm/evGGPPxx63tRf9D5Qi+Z5Zttm4S+7aBpdQr5l4Tm5n+xcyCBogKZ/BkHPzbLPcPygu4pCt762Me1XrF27NtSffPLJUGd2MiULTcj3huOHaqu6VqWUOIZu2bIltPH9y97dTLPlthxT9VrzmOqWWUp0XRpN9h5rtMYYY8yA4YnWGGOMqYgnWmOMMaYiqUZL7eHpp59uymq7L6WUAw44INQ3bdrUlKnfrlmzpinT32rcuHGhrjoA7fP0F1M9lzoJQydq+Ea2sa66Bc9T7fUnn3xyaLvwwgtDXa/n2972ttB26KGHhvqHP/zhprxy5crQRr1ZbyFDqum5nHDCCaFtwYIFoa4aB8Nb0idNz7stzJxqLtSiM19o6ux6H/gsUOsaBF3P9GY04fwyxiokI7fdsWNHU2aIw8wvlfvhu6H1ww47rOd+7rrrrlCfMmVKqKs/aRc/VbZxPN65c2dTnjlzZs9jtoWf7XX84fqbtWXrhjgm8L5o/3kutUN5+ovWGGOMqYgnWmOMMaYir8gaaQ6YOHFiU2aWB5oEJ0+e3JSPPfbY0Kamz5tvvjm0rVixItTVZENTAE3b2q5m5FKGmmbVpKom8VKGmknVvJJlF/rRj34U2u6///5QV7Mzzencr15PmsHVZFNKNPfQhKrn8oMf/CC0MWSZmpZpBtdQbKSLeYzwnqqpiM+U3rO2LCg2He+7tN3Lfu9fZpLkfvk8URp75JFHmnI2BpQSn9PM3a6UGDoxk2s4BmTZv9qun45nHEPXr18f6iovHXLIIaFN54cu2Xu6hNJlVjheI90vJT/2adWqVU2ZY/ykSZN69qeLlNHr2fQXrTHGGFMRT7TGGGNMRTzRGmOMMRVJNVou/VbatAe1w3Nb1Xff/va3hzZqq8uXL2/KP/7xj0Pb0qVLQ121Vuqu1FxUX6BLjC5pLyW6EWUaAdm4cWOoH3300U2ZOuu5554b6u9617t67pd6jYZjy7QRdasqZehy+G3btjVlhnzjearmTb0j05N4zCx1YHYubSEircPuu7RpZFrP9Mgu4T4J3/MsBCOfd302eQxuq2sqmPpOx1+OUdk1aUP7R1clug3pWhuumchc/Pieazuvye7du0Ndx27eB/Yha8vCtD7++OOhTeektv0o/bqe+YvWGGOMqYgnWmOMMaYinmiNMcaYiqQabZbqqE0j023p30T7vUL/sFNOOaUpL1q0KLRR73jsscea8g9/+MPQdscdd4S6+mcx7Bj1yGeeeaYp05dMt6WOwm01Nd7xxx8f2nhNtE6dQnWTUkpZsmRJU6bGrVoEQ1Yec8wxoa56KfUiahp6brNmzQptmuKvlHhP+SxwHUCWUi/TSqzJvngYTSrFkT4HbWEBdazhuoLMj5bPN99l3RffXR0DGJo281XP1jawT+wP3109l8wvmefJuv52+/btoS0LKcsxfjSaraYppb/w4Ycf3pQ1XgT7TrIYAWG7vrYyxhhjzIjwRGuMMcZUJDUdE/2M52c565nbi27L7bIwWm0ZZS6//PKm/J3vfCe00b1Hf8twknPnzu25rZofSokhzHgu7K+aYzW70XDbqhl8zpw5oY3uR7otQ6qpaYrmHA0rV0o0g9BcwlBo6lL0/e9/P7TRdPWWt7ylKS9evDi00SSWmegyt462kIxmsBir7D1d5IQuZmY+//peMcxpZg6mWZnmYR0LObao1ER3wMw8THMr33t9l5mNjOfWr4snrwFR10ENqztcf7Wd84GGSiwlynW81hwT1BTP+6DHzDKgkbbMZc3fe+7BGGOMMaPGE60xxhhTEU+0xhhjTEVSjTZzr6Ammy2Pz1I68RjZsnrayi+55JJQ15R71BqoBaq9nuEaGdpRl7xTz9XzpsZC3UKXtc+YMSPtn/aBy9+pTSu8RqoZUD9giEjVbto0Tr3/7A+X73/hC19oygz59uY3vznUdWk9nw19FtrcOOzuM9hs2bIl1CdMmNCUObb060JB2p5hHWt4DL5HqunxXeXaDNX7uBaD74q+23xm1QXlqKOOCm2ZOyB1Va7b0PGOY1aWtpLo+8hrren/SoljDccz3u/58+c35alTp4Y29ld/25YeMFs3pH164IEHQhv1cXVnpMbdC3/RGmOMMRXxRGuMMcZUxBOtMcYYU5FOGq1CjYAah/qS0c6vtvMshFYp0Sb/gQ98ILTdfvvtoa5htOgnRZ/Mc845pykzhCB1FL0ODN2ltn1eg4ULF4b6iSee2JSpY1LTOOmkk5oyNSGmsso0W9VneK2nTZsW6qqjMHUV77f2X0NUDtcf7cNVV13Vcz+lRN2dPnN6famh0Rcv830zex+G9FRdk+MO76W+D6rtlpKnXSTZ+MbwqQqfd+5H12Yw7CnrK1eubMr039QQqTwG4wJksQn4bmg9e1e5X2q/ehz2Z926daGu48lhhx0W2nTcLqV/3bONLHUm27L5in6/ur6AMQ4Yg+FX+IvWGGOMqYgnWmOMMaYiqX2NpgI14dCUmGWzz8zDNLfymF/5ylea8rXXXhvaaPbQZevsD002+omvZtpShi7n1nM99thjQ5uaETQUYilDswJt2LChKc+cOTO0MZuP9o8mpSxzEs1EaoahCY5mtsxVieEa16xZ05R5f2nyzVypbrvttp79vfTSS0ObnjdDQmYuZGbw4PuYZfSim5yGINXnsJQo12Tm31JyUyLfFX0fH3zwwdA2ffr0UNf3iLIPx6UzzzyzKXO8WLVqVVOmuTpzkeG50JSsshrfc5K5Vuk7uHbt2tBG6UllM459WThfngvrOl/wPDOX0y4hWjnP6LhJCcSmY2OMMWYv4InWGGOMqYgnWmOMMaYiqUbbJT1Vl3RVaiunJkvt4bLLLuu5baYLZyHUSok6C3VX6n26L+539uzZTVnDB5YyVN/Q/XI/WWgxag9ZiLVsKT/hNVFXG7rdMJScpsmjSwC1MdWp2He6Nd1yyy1Nme5RZ5xxRlOmbsJr1OY2ZvYuWcq6trFF3yNqgT/5yU+a8tFHHx3a+Eyo1pa5yJUSw7DefffdoY3PfzYWUptW/ZluLTp+8H1kf7M1CuyfvnPZNWGd90zHBE2DV0opp512WqirSyLfVY6T2l9q9xw/dC7JwkXyt5n23BZeWOu8n73wF60xxhhTEU+0xhhjTEU80RpjjDEVGXEIxrb0dtpOG3emn33jG98I9SeeeKIpU4fgfjSFHf3OaOu/7777mjL9PqnZqt8XfWw1HR/bqANkmjf1UtU81G9wuG313KjHZD6s3I+GUeN+mNZMt21LraXHpV8htSe9Rt/61rdC28knn9yUmQZxpKnUzN6Bz4jevyxVZinxeaJGq2183hm6VH/L8WzBggWhrj6uXK/A9RUa2pRrTrK1BDxv1WHVp7aUoWOhaqkcF/mer169uilTY6RGq2tJeB/Ud5ZjJn3/M99d9k/HAGqyrGuf2EZ0Wz43eq15DbIxv188OhljjDEV8URrjDHGVCQ1HdMEoaaNtiX4Wqe5RD/NadL95Cc/GeqanYPmYIZmy5Z3Z2bwW2+9NdR/+MMfhrqaLOfPnx/aNOMNzUQzZswIdXV7ofkhM4/RvEpTsl4HXhOt05TGbTWkGs07NMvob9tCvik0SdNMo2YkhnVbunRpU16yZEloa1vabwaLzDWPpkQ+7/p8bd68ObTpO8h3jHU9Dt1TGBpUpRKOWQxPqqZPmlD5zmWmYzXbavjWUoaafNW1kO8UQzs+8MADTZmZy3iNdBxQlz5C1ySOhbofhk/NwiqyjXOSjifcL9FnjvvRZ4pjFMdmNSX3m2nIX7TGGGNMRTzRGmOMMRXxRGuMMcZUJNVou2ieWUi1zJ1H9YJShmou6rLDsIXcr2qZDFFGXUU1A9rkqUd+73vfG7bM/lF/4THVvSDTjInqQ6UM1bD0OnC/mV7KNq3zGmT3sMu5cFk9NSHV4Xm/Nbze4sWLQ1v2/JnBR59phuXkvVXNbMeOHT23ZShQ7ke1NupwdOHR54n6qLrLlBLfFXX1KWWofpqlE9Wxhb+ju8/ZZ5/dlKkbUkN++OGHm/IFF1wQ2rg2Q9d1PPTQQ6HtuOOOa8ptY5/qp21uj1n6QqLHoctT9txwfshCgLIPeq6cO3rhL1pjjDGmIp5ojTHGmIp0yt6TRXvKMuvQVKDb/td//VdoY5Qm/dynKZGf//pJn2VcKCU3UbK/2ie6GKlbAM0urKubAqPU0AShdZqKMxcBngu37fW7NrIl720ygl5PHpPXWk1MdGlYs2ZNU+Y1acu+YgYLSgjqDsLnh8+wmjNpJlVzMMcSos+MmmlLKWXy5MmhrqZamnHpwqOmZLrUsb/aRx5TTdR0K+S4ec455zRlvo/r168PdXU7ZMYxjh8amW/58uWhTaNB0YTP/eg1yzIElRLvf9t4oefKa005IBsj9Dg8BuuZS1Yv/EVrjDHGVMQTrTHGGFMRT7TGGGNMRVKNNqMtI09mu1Y3kjvvvDO0UQvU/dIdhduqJkr7fNY/6gkM+aZ6EpeQ62/b3IRUg2kLM6fL1nktuXRer0umnWfZg9jOY2S/bQvHmWVmob6lWg91bHX9Yli5ww47LO2DGSyyrE18j6i9qbsb3/NDDz20KTMMIJ81vve99lNKdDlqy6Sj49Bjjz0W2vj863ObacqaQayUoZqojlHUFPmuLFy4sClTH2WWLnWpO/LII3v2neMZyUIVZmEWOebzfuuz0iWLGM9b+0ctl/dX+8C2XviL1hhjjKmIJ1pjjDGmIp5ojTHGmIqkGm0WiqpLmrwsjRq1Gvqe0pbeaz+Ex6TGqHopU2QxLGCWwivzYaXOSR9chWEWFWoGmWabXZPsWvK3bSEN+/WT5nF5/agJ6f3ns6DXlym7qNGawYbPk/rKPvroo6GN/tT6PtDHVp81rsugnqb7YX8YZlHTyWkIw1KGjh/6/HO/1I2/9a1vNWW+12eddVZTHjduXGg7+uijQ101ZB6T772OffRnfvzxx0Nd39e3vvWtoU2vUTZ+lZKv2eF4oRpo5pNfShwTeH+5X73f7I+OzdSBee2zNSe98BetMcYYUxFPtMYYY0xFOpmOs4w8WchDbquuLG0m3iwEI5eMq5mDZgSaVHXJexe3F5pxaWbI0P5lbkylxOvCa5SFu+S2GlqOpuvMzNzmHqPXhPuheUdN2zTL8ziacYXmuyzbSltGkOyYZs9DU+Ott97alD//+c+Htrlz54b6CSec0JT53uiz1mZCVXeatlB/OtbQDYfZe/S5pXzE8UPdiK6//vrQpubqOXPmhDaGZFy7dm1T5vhAVyUdG9etWxfaeC56rRmuUU36HHd4/XS86yI7tmUF0m3bxgSdPzi2qEzFc+FYrb/lMXrhL1pjjDGmIp5ojTHGmIp4ojXGGGMq0ikEo9rDqQNQE1N7OG3emgqNNveMNj1Uj9MWjitro00+c2vS47TpzbxmGbovnjf3ozoBr6fq4V2O3yWFHs8z05SpnU+fPj3UVYviflQn6/LcmMGDYQxV7zv55JNDG13CdK0BwzPqu8vUd1zjofulKxm1VNV3qY+uXLky1PWd4zFZ13UH1CM1/CHXNlAnVhcjHV9LydeV8NpyjFC3ObpZaRhIHoPXU8cTHoPnptuyfxzX9XrSDScLW8lz0f62uQnpuDRhwoTSD/6iNcYYYyriidYYY4ypiCdaY4wxpiKpRpv5RmV+s4RtIw3118WflLb8zP+KZPpfpl3Spyo7lzY/2sw/K9sv2/QadfE1Jdn9bruHqn9QN6OP39SpU5sy9RndT1s4STPYUI9ULZB6H8Ntrl+/vilPmzYttOl7r2EJSxmqBTL8oMKUeqql8t2dMWNGqN97773D9rWUoee9aNGipnzEEUeENvVxZcpIou82dWC+czqO0p+U+rPq4QyXq1om33nqnDqmctzOQtMSjmHZ2ML7rWML14YsW7asKVO/pfar14HpTXvhL1pjjDGmIp5ojTHGmIp0cu9RaCpoy+CiZEupaVrJyFyKMrMyj0szc2YK7ZLFhuZg7VOb609mGs1clWg20j61ufdkYRW7hONkXcMq0ryTmZiyEGo0pfGaOMziYENz4YoVK5oynz26q6h7yvHHHx/aNEzg5s2bQxtdgXS/NAczy868efOasoZGLKWU0047LdTf9773NWWGmqQZXLNX3X333aUXNHPT5KvvFd0Bs/GM4wXfXb1PvC9qmuUxeH/12vO9zsZN9i/L7sbnhFmV9B5TGliyZElTZkaxTZs2hbpe3yyMreIvWmOMMaYinmiNMcaYiniiNcYYYyqSarSZ7trmjqK2a7apbZ8aXbbUm243bbZ+hS4D2gdqN1koxS4uRZn9vk3T1utLvTHrHzWNLCxlW7qqrE2P06ZNdzkX7WMWmm3y5MmhjdeP18EMFvfff3+oL1++vCmrpl/K0Hur7dRW9T3negCOLfytct5554X6ggULmjLTc/J5z94j3U8p8Xk/55xzQpumvrvnnntCm2rapUT3oyx8ZClxHOU1efzxx0Ndz3X27NmlFxxburgrdoHvdeZmSFQvp56rv2VfGe5Sz42uP73wF60xxhhTEU+0xhhjTEU80RpjjDEVSYUsag1d7OGZ76lqCNTa6PuU+YxmvrxtvmSqz2THKCXa5KlFqMbctp9+UwcOdxwl0zmzNH5tvs+ZjtKWCi9r03uR+USWEq81tSbVi1STKsWa7L4GtUDqYAqfJ/VjZfhDDY/HZ4vPno4BfH7o/6p+tdSMuc4kC4eY+bTSt/PEE09sykcddVRoe+yxx0L92muvbcoMlZiFb+Q4Qx9SDY3J+5ClGeQ1UZ2Y+m2WUo9kY1imu7JP1K11Wx6D/dFxKXtuQz/72soYY4wxI8ITrTHGGFORTtl7sk96fqZnZkgNH3bqqaeGNoYoU9ebLsvo21xZ1GzEvtLskZnMdVtmm8lCCNKsRfS3bWG+tA/su/6WZqzM3Mp7n7kUZW2lRHcChtjkNVKTDs076l7AEHSZzGEGD2ZIycLu8fk64YQTmjJd/rTOZ4AmVH2eKGHR9UdN3QzBSNOnyhptrm/Z2KLnwnfh2GOPDXXt0+233x7abrzxxlDXjFnMLnTSSSeF+pvf/Oaefdf7QnM6Te+aSYnjGa+f1jPZsZT4rLSFa8yyf+m5ZWZvtnPbXviL1hhjjKmIJ1pjjDGmIp5ojTHGmIqkGi1t6Zmml2liWXiuU045JbTddtttoa5aJvW9TEfMUuiRtvRxmRah50a9KEvdRj03W1LOY2auNdQwstR8GW1p5vR6tm2bXV/2V7VX6rDz589vylnIz1Ly9QRm76PaXykx3CA58sgjQ/2MM85oyhMmTAht6gLGEIx0n8nSvC1btizU1Z2GrmUMFarvNtsYvrFf3ZDvEOvapze96U2hjdfvS1/6UlO+8847Q9uXv/zlUNfxl2OWhsLk9Zs+fXqoL1y4sCnzXWUKQD0m3+NML21zG9LfUlvlPVWyFKvWaI0xxpgBwBOtMcYYU5HUdJxlYSFZhB9+wuvS+de+9rWh7eqrr+65H0aToTlY+0ATA5ebs09KZqqlKUjPhcekqVu35bVsMw0pNLdmplvtE/tDE45uy+Nn97AtE5GaZZiZJTPDTZs2LbTNmTOnKWdmeTP4vO51rwt1NQcz2g6fA31m+D5qPXvHS8mjAc2bNy/UV69e3ZTvu+++0EaXGB2XaG7lOKTyCN9r7T/Phf3Vd47vI03HH/jAB5rykiVLQhvN7epuOXfu3NCm90kjZ5VSysaNG0Ndx2a6UjErkN4XjlmZBMjrl7kOcj8rV65syhoNq5Sh0cf67U/oS19bGWOMMWZEeKI1xhhjKuKJ1hhjjKlIp5QnmfsMNYNMM9Ml0RMnTgxt55xzTqj/4Ac/aMrUN6jTqS7QFrpL+8Bwa9xv5iLTxY1E3X+oLWWuN20aqNYz158sJGQpeTjJLAxkW8g81WfYB2pCel2olag2R1cqZ+/Zt+C9Vb2Pbl1EtTa+R/o+dnE74zvF8UKz59xzzz2h7Zlnngn10047rSlzDQLfqyyDkK5n4NqGLutnqHNqKMpzzz03tC1dujTUVU/lu6r9pY5JXf3ee+9tygyzy7FFz7VtnNSxhfeb7lzaX97fW265pSkzvOU73/nOnn3IQg0r/qI1xhhjKuKJ1hhjjKmIJ1pjjDGmIqmwRc0g8+3sEuJQ7dr0DzvvvPNC/ZFHHmnK1DcYti3TKbKQfV3Ok22qF1Gb5H5VV2w7pp5LW3rAzKdV69Rq6Peb9b1Le+YjPG7cuNDG1GUaUo9aiWoj7Hv2jJnBQ32iS4nrA9rWf+jzlaVo5H66hGzlGgDV++iX+vDDD4f63Xff3ZQXLVoU2qi1ah8Yi0B9U/nucj+qZTIsIN9VPVf6vzJWwfbt25sy39XjjjuuKTMUJvug1+Ghhx4KbVu3bg11vb9ZWjxuyzU8vJ6aHpD7ffe7392UmVaQ116fhX7D2no0MsYYYyriidYYY4ypSCefCDXhtLnz6Cd1Zt6hKYDLxDW8GZfRq1mD7W1mUjUztIU30z5yKbqaKNtMx7pf7idzKeoSXjAzl7VlvOnV11KGmoPVtJbtp5R4fWfOnBnauHz/6KOPbso0Rz3//PNNmaYfh2Tct+DzlD2nfKb13aB5UO97W1jTLs+IPqd0SeQzrZLW97///dBGk7mGeswywbCvNJPq+8j9ZK6NvCYMh6jZhijVMYnSmgAAIABJREFU6TFPPfXU0EbXGh0DNJNPKaXcfvvtob5mzZqmTDcwjtXaB57n008/Herr1q1ryswupNB965JLLgl1mu37wV+0xhhjTEU80RpjjDEV8URrjDHGVCTVaKm9jVT3ylxBqK1Rn3njG9/YlNXGPlz/VMPbtm1baMts+2196De8X1t4Rq1nKera4G8ztybdtk3XpN6l0N1BydyhSokuPbrEvpShYd2OP/74nsfU+9B2ra3RDjYMe6rw3rKuz0H27PGdytZQZJpxKXFs4VoRbqsaLl0Sb7755lCfP39+U377298e2vS9absm+p7v3LkztPHcsnSiHCNOOOGEpsz1M+qmc9ddd4U2pj/V91F131KG6rsPPPDAsL8rJd6HUuK94PjFZ0y3VbfRUkq58sorm/LFF18c2qg3j8R10F+0xhhjTEU80RpjjDEV8URrjDHGVCQVHzPdK/Nt47aZhsj90P6t6bPe8IY3hLbrrrsu1Hfs2NGU6UdLXUX94uiTlkHNQLWRtmuimlCbn2oXMj1Sryc1Tx4z0x6oC2fnwuugvoJMG3bMMceEumpGWVjFrD/D9ckMFnw/9f5R88zejSy8II/BZ0I1vSz8YSml7N69uylzLKG/5qZNm5oytcDTTz891DVNHZ933Q/1Rm6rWjB1zCycJH2CuZZF9zt16tTQdsYZZzRlTYNXStRZSyll8eLFTZnjA9dpzJ07tykvX768ZOiz0jaG6rOi6f9KKeW3f/u3m7KeVylD0zbqte93/Y6/aI0xxpiKeKI1xhhjKpJ+92bmt7YQjEqX8HhsU/PPKaecEtqefPLJUFeTCU0rK1euDHU1UbaZIbWd5lU9TlvouOy82d8sTGXmapORZQgqJfafy/xphlO4XJ/mKA25xpBqzISi1zpzuchMhqXYdDzoqCm2lDxbFccPJZOe2vajzxrNwXSReeqpp5ry5s2be7aVEiWsCy+8MLSpqbiUmLmGIQ579bWU3D2K0BysshmvCfer15Djh5pU1TRcSil33HFHqGtYRXVpKmWo2V7HD44PKiWWEsMh8jzpuqShJzlWZ+EvM/fPfscZf9EaY4wxFfFEa4wxxlTEE60xxhhTkVSjpRaotmnauKmHZGnyFNq4eUzVE6gRnH322aGu2gm3pQaj2ghdTrLwZpkLTJc0dNQ1ua3qFtQesrBzWbg66q50a9I+8JpkWg5DlB1xxBGhrmEXmSIr0/qzMJVZOsDh9ttvm9kz8NnT8YP3ku4feu+zlJZ8H+n2sn79+mHLpQx99tS1ZcmSJaEtC22qaR9LybVfaoo6ZlE7zcZU7idLF8jrl41hHOPVrYnj2Yknnhjq6u5DNyENNVlKHPO5n+zc2Pcu73mXUJ7WaI0xxpgBwxOtMcYYUxFPtMYYY0xFOoVgVNrSNimZHbst3Zm2U7+ljnjRRRc15c9+9rOh7fDDDw911Yja0l6pTZ46j/a3LcShtlNzoQ6gv2WqrYxM86QvLM9T/eJUwy4l6ialRJ2FvrHUYVW/4bYZvCZ67dkfs29BrVKfcd7bLFwp3zl9r/heZ2k1FyxYENqoI+o6iTa/dtWU2YeNGzeGup43wz528R/WPvG9YX91/QevCbXWzL9U90vNnbq1Xt/Vq1eHNk3FV0q8LxxDOeZnqTNJtsZIn7HRxEPoeezOvzDGGGNM33iiNcYYYyrSX+qBPUi2FJ1mDH7iq8nm3e9+d2j727/921DX0F1k+/btPY9DU4aaqtj3zLxJE02XrEDcVk1MWSYdmrhomtL+06WIJhttX7RoUWhjaMz3vOc9PffLPmTmn5FmcTGDx/Tp00Ndnz1KHKzrvaWZVEP0HX/88aGNGVv0WaQklGXH4bPFcJJaz7KalRLfySz0a5tUp/vh9ZoxY0ao63jG/tEErO3cVq89+85t9b5wfH3iiSdCXWU+3heapHn/M/o1+XYJL9wv/qI1xhhjKuKJ1hhjjKmIJ1pjjDGmIgOn0WY2d7rEUO/TZetz5swJbb//+78f6h//+MeH/d1wddUFqHOqHkL9gLZ91XmydFT8bRb2sZSo13C/WUjDLCUhNSDq4xqG7rHHHgttqsmWEvW4zD2Kfco0oS4hF83gcdppp4V69rxnYQL5/Og7mLkKcj/Ub/ns6VijoQdLKeW+++4L9WnTpjXl2bNnhza+gzqeUOvV8YKp+FhX3ZM6K92GNA1dm7aq72u27oHvYzamMo3mqlWrQl1DMnLbbAxjH7I1HUT3Y43WGGOM2cfwRGuMMcZUxBOtMcYYU5GX/TIxvGchwPYUmc9XFr6RGgE1xu9973tN+Q/+4A9CG3VEZdu2bT33S22Jeqmmk2vTGPXc2lISKln4SF6vLFUZ/Xx5TVR3/cQnPhHaFi9eHOp6brwvGvaR/eW5aJ/angVqT72OYfYODMGYwfvV77jUxUeb8H1cu3ZtU16xYkVoYyq8uXPn9jwm61l6Nn1X+K7SF1XHJeq3HLP0PZ8yZUrP45cSNWReE40TwDGJa20yX3/2d8uWLU359a9/fWijlq6hMts0+SyFYqb7d9Fse7V5xDHGGGMq4onWGGOMqcjAmY4zky8/y2lOUdMA98Ol82oqWL58eWj7m7/5m1B/8MEHmzJNPWpaoTmC5tfMHJyZlLLsJaXE+8Tzzpat0z1K3QDoxnT++eeH+l/91V81ZYZn1FCYpcRzZVhKmpiya6T3t23pvk3Hgw1dZPS+ZyFGu5DJMcPVMzZs2NCUJ0yYENrUHaWUPCxr5vbC0ImZq1tmJuW2zCC0bt26prxp06bQloVazUzxlOYyc2tbxrbHH3+8KXPcZrjX008/vSlzvCV6nMx03GYqzsYem46NMcaYvYAnWmOMMaYinmiNMcaYigxcCMYs3Rk1Weod6jJAHYBhyDRsoIY6K6WUM888M9RVk3nooYd69o/6RpZqq0vIMoYdy/TdbEk79VHqCeqyM3/+/NDGUHJLly5tygsXLgxtPDcNo8b+8dprCD3qPqobUyfJNFkz+GRuaJkeORqXnSzsI/c7b968psz3kdqqttPFj7/Vd4VrJrI1CXzeM92Q5zJr1qymzJSWK1euDHV1DaIWrZpomx6u725beFem9VP+/d//PdR17GHqU15rJXsWspCfI8VftMYYY0xFPNEaY4wxFRk49x6ipkWaRzLTACOn0J1Az02Xk5dSysaNG0NdzSfcr5pEaJ7Ooj+1LcHfsWNHU6ZpiqYMNb1wibuae+h2o5GqSokmG7owaASWUmKGEh6TJjA1WbON55Jtq9ezzZyTRW+xe8/eh5KB3lu+53xXsuhhvbZr27YNHSazDDylDJWBMvTdbYtIlJGZPkl2HTgOrVmzpinTrKzjHU3Z7EMWAYvR4bSuxy+llI997GOhfsMNNzTlX/u1XwttHDczU3w2JrRlk+qnzSOOMcYYUxFPtMYYY0xFPNEaY4wxFRk4nwja8tVeT5s7tUHVRpiVQjXFUmJor0MPPTS08Ti6X+ovqndQq6FtX92T2JbpPrT7Zy5QRN1lMs2zlHg96QrEY2roRB6fbk4Kz4VL+3VfvJ7sv8Lnpou+ZfY81Mi0zucpC9NJslB/mQtYWzatLCRqtmaiTbvMQv91oYubU6Zx04Xn+OOPb8oLFiwIbbpehRl4GDoxG0Op12u4y69+9auhjWtHdNxscwvrd21GF022X/xFa4wxxlTEE60xxhhTEU+0xhhjTEUGzo82s6O36RBddAol8/EaDSPtTy1G4z86Gp1iNPe0Bvaj3ftQf8/okt4u02i7wN9mWmDWh9H4e+txshR1/fSp17ZtoRP1uF305ay/bOOaGA2lu3Xr1tDGWAAzZ85syroehf1jvUvbSP2Zw/773oMxxhhjOuOJ1hhjjKlIajoeNGqZjkd73JcCozEd92se21PYdLz36SLX7AkJq82Euiee0y7HzLZtu14jNR2P1X1oM812kZoyM31W5zGz7Ec2HRtjjDEDjidaY4wxpiKeaI0xxpiK7FMa7YuJ0ejNbBsr9xlNJfjCCy+Etiy9HcM1ZnpHlvKMZGHweAzqR132a4wZHW3vdb8uWW1tmdvQBz/4wdB24403hrqGl2Qox7/7u79ryosXLw5tXdZ0WKM1xhhj9gKeaI0xxpiKDFz2npcqNK1ohgtmuMlMLTT5aoaNLVu2hLZbb721Zx8YZYUZNjQ7EvvO3x511FFNWbOBDPfbfs26NFXRlDxWEYKMMe20vbcjlWu6SGzM2PaOd7wj1FesWNGUN27cGNq+/OUvN+UlS5aEtueffz7UmTWuH/xFa4wxxlTEE60xxhhTEZuO9xJtq2RpLlZoNlVz8Q033BDali1b1pQZvHvSpEk9+9SWNHvt2rXD/q6UoYnfly5d2pSvvvrq0PaGN7wh1OfOnduUp02b1rMPWQSbUqIp2auMjanLWL5jXTwulNe97nWhzoQEhxxySFN+5JFHQhu9Kvpt6xd/0RpjjDEV8URrjDHGVMQTrTHGGFMRa7R7ibbIKereQ42A21522WVNefPmzaFNdc5XvvKVoY31Aw88sCmrW1AppezYsaPnb7ncXZNkc79su/nmm0NdtdZFixaFtje96U0998Nz0WvUJZKWMaY7bW44Y+Xekx3n4IMPDnWOhbp2RPXaUqJLYtv6mZFEufKIY4wxxlTEE60xxhhTEU+0xhhjTEWs0e4lqAPQtq+6wLp160LbFVdcEerqR0vfU/WdpW8uQ4vpfn72s5+FNma72LVrV1Nm+EOGa9y5c2fP/fC3Wr/pppt69u9tb3tb6ZcumX2MMXVp011H+n5ybCG6joPH0DUxHItHGiZW8YhjjDHGVMQTrTHGGFMRm473EpnpopRouv3P//zP0KZm21LiUnWabdXcSjMtQzlqHzRJMvdTSinPPPNMU549e3Zo0wTybH/22WfT/WbZOB588MGmfNhhh4W20047rfQiS0RvjBl72syvGbptl/0ceuihoc7xTd0Q6TKp43GbqdimY2OMMWbA8ERrjDHGVMQTrTHGGFMRa7R7CeoAdL257bbbmjLT22m4sFKim86sWbNCm7q20GWHYQt126eeeiq0jR8/PtSPOuqopvzkk0+GNp6L6rLURqgpa2jFLHXgf//3f6f7OfXUU5uyhoA0xuxd2jTOsUq5x5CMOv4x7aem1MtcLUtxCEZjjDFm4PBEa4wxxlTEpuO9RNuy9Yceeqgpty1b130xCpKaanfv3h3a6Ca0//77N2Waipm9RzNhTJo0KbTRRK37ZVYgZv5RaLJRUzfNwddff32oa5/UzF3KULORMWZs6eLO08WFJzPrcuzLpDG6/Kn8tnHjxtA2Y8aMtA/94C9aY4wxpiKeaI0xxpiKeKI1xhhjKjJwYhXt8+ruQbt6ZitvC6NFnVNhOETVObss9c6yUuh5Ddcfbae7DHVN1U8Z/lD1UWa32LBhQ6jrebN/dClavXp1U6aeS1SzpbZKN6KJEyf27IPWGSLyoosuCvXjjjsu7ZMxZuxo02RVHx3LNRI6bmpY2FKGhmnVsYZjqI6NK1asCG3Tp0/veXyed69wr/6iNcYYYyriidYYY4ypiCdaY4wxpiIDp9FSq1SbN3VW+k2pBtrmm6XtWTqlUqLuyTY9JlPAUY/U42jIr1KG6sJ/8id/0pS/8IUvhDbqmuoPS/1Dz3Pnzp2hjaEdNS2dpt4rpZQTTzwx1PV6qs9vKaVs2bIl1FWj5X1RTbaUqAXTD077cMwxx4Q29es1xuxdstRybXqujpMcz7I1POvXrw9tXJOSpcLTcefee+8NbWeddVao65zENTu98BetMcYYUxFPtMYYY0xFBt503O+neSm5eYL7VbMzTZT8rZqL2R81i9IU+/DDD4e6mjZOOeWU0DZnzpxQf+GFF5rypZdeGtrownPTTTc1ZYZDPPLII5syQzlyP2rqbjPZ6PXUTDmllPLggw+GuoZA5DWii5Fe65kzZ4Y2vfa8n11CvhljxpY2WU/NrZlrZSn5e87jUILLtuXYo6j0RGnu/vvvD/VFixY1Zbogci75Ff6iNcYYYyriidYYY4ypiCdaY4wxpiIv++WAiVu07WdQL1V7fra8vJSo6TGtG+3s+ltuq1rmRz/60dB22223hbra9v/xH/8xtKkmW0q0/fM8eY1UW+W26jbE88rOhWSaSy3awmgq7J9T4Rmz5+gSgrHtt5mb5ubNm0N93bp1TZnuPNzvI4880pQfffTR0KbhG+kGyfCzH/7wh5sy3SAdgtEYY4zZC3iiNcYYYyriidYYY4ypyMBptIQ6opLphPS/YnhEtcM/99xzoW379u09f8vL9cQTT/Tsw7/927+FuvqxnnHGGaGNYQyV+fPnhzq1VvWHPf3000PbmWee2ZTZd/qwqhbB8JHTpk0LdabuUzLNhaEmSbZfvadtGnGm5xpjxpa2tTXanq2tKSW+2/STZXhX1WXZh+effz7Uly5d2pSp9apGS39b9m/BggVN+eKLLw5t1HN/hb9ojTHGmIp4ojXGGGMqMnA+EAxppSbANnOhhs7atGlTaKMJQrelqWDbtm09+0TT56pVq5oyTQxHH310qGsoryVLloQ2NfGWUsrUqVOb8ty5c0Pb7t27Q33//fdvyjS1X3XVVU35lltuCW00e+v1ZXYchohcuHBhUz777LN79r2UaHpn9iOaeLPMHVpvC81m07Exe47sPS4lD6uYuSQyHCLdaXRMoItk5uLH/upvKTNyv2vXrm3KnFdmz5497PH8RWuMMcZUxBOtMcYYUxFPtMYYY0xFBs69J0uhxOXbK1euDHXVZWln32+//UJdl4XTpYTuM6rR7tq1q2cbl4zTTUiXpk+fPj20TZ48OdTXrFnTlJctW1YydBk7+/fkk082ZabFy0I58j7MmjUr1FWzpS7xG7/xG6F+0kknNWVqI1yCrxou+6CaS5smtCdCRBpj/n+6TCMcd7Iwi3yvOTarfspxnGPhvffeO+wxSonzBde57NixI9R1DQrX1rz+9a8vw+EvWmOMMaYinmiNMcaYiniiNcYYYyoycH60tN+rDf6OO+4IbdQuDz/88KZM3ZD2e9UC6atF7Vf1UvrYKpkPcCmlPPDAA02ZvrAMAzlp0qSmzLRN9Os66KCDmjJ9grVPBx98cGjL/FTHjRsX2hiSUftEHeWKK64IddXOzz///J59LyXef56n6q5ZmEdjzN4l02E5LlIT1bGFa1c4JmjIw7YQjArH6uwYXO+hYw914F54dDLGGGMq4onWGGOMqUhqOuanuH4yjybEnYbYotsN+dKXvtSUv/vd74a2s846K9TVfeWee+4JbXRPmTFjRlPmuXDbww47rCnTHKG/ZXhG1k899dSmrGbkUqIbTikxQwSz7Kgpu5ToRkSTiJrQea1pDqYJXaF5R/fL54TbXnfddU35iCOOCG3z5s0Ldb2e7J+em913zL5AmyuLkrnI8Hej8crsMnZnx+nSpmMCXXToaqOul9wPx7AsMxh/q+2Z1JS5dxKbjo0xxpgBwBOtMcYYUxFPtMYYY0xFUo22iy0/0xDYpnZ22tFvu+22UL/yyiubcq/wVr/iRz/6UVNmaKyJEyeGui4LZ6hE2ui1j9QI1LWFrirPPPNMqKteM3PmzNA2ZcqUUFc3HXX1KWVoKEUNETZ+/PjQpm5E7Lteg1LiUnq6CXHJu+rGvLaqf5cS3Xs+97nPhba///u/D/WRuvBQC7OGa/Y1uui3eyMNZJaKsk2L1jGVmiddHXUc4nuchV5tc+/R42b6MscZukGO5Nr7i9YYY4ypiCdaY4wxpiKeaI0xxpiKpBrtSH2oSon2ctq8tb5u3brQ9tGPfjTUVctk6jv6fZ5yyilNmb6w1CdVN6QNniEFtc7z1j61pWlSLYI6BfVc1WGZpol11TGoN2ufeF7sn4ZdpC8s+6vXnveX/q96Dxne8tvf/naoa4o9ai4ZDsFoBpGx0lL3hiZbSnyvOPbp+9nWPx17OOZzXYkek+MQ15Xofrkt5wsdw7L+UhfOQtX2e188OhljjDEV8URrjDHGVKRT9h41HbSZjjM3Df3c/8pXvhLamKlGQ/Qxc860adNCfeHChU2ZLjrcL80MCjPpqMkhM12wf9yPmq+59JxmZ3WfodmWJhE1M9PMoWHH2EYTtPaX5hPNdlRKPG8uzyfq/qMZlkop5YYbbgj1Cy+8sGcfsmfO7j1mEOniljOabUdKm+mz3zG/TebRsYXjF7OK6bvLcTwLs9gmx+m2mTmYx+C5aZ+4n174i9YYY4ypiCdaY4wxpiKeaI0xxpiKjNi9p8tybrpeqL3+O9/5TmhjaEK150+fPj20zZkzJ9RVh820ylJiSEGGTmT4QdUMqK3quWzcuDG00bavfaJ+y9+uX7++9IJ6s0K9VI/J/hxyyCGhrteE7lC8fpkGSv17y5YtTfnYY48NbQwnqdeBoRz1mAwJSY3bmH2BfrVWbtdFox2Na1C/x+E7zzFf16/QnSdbw9O21kLHNI4JHKvp3qjoNWJ/2AfVaPsdd/xFa4wxxlTEE60xxhhTkU6m4yxKCM0TWdQQdV1Rs2IpQ91INMLT/vvvH9poHtZj0jRLs6ku/aaZIzMxZBGcaAblcnM1r9C1ZunSpaGuJnPuh3XN7sP90h1J4X3R68Bj8Jpk++X11AhUjBrFrEWPPfZYU6b7lppweO8HIbuJMaTtORzpczoIz7f2gW44jDqn4wlNsazr+EFzMMcWlbQom1EC1DGM45ueS1v/1Fxs07ExxhgzAHiiNcYYYyriidYYY4ypSKrR0h6e6QKZ3kd+9KMf9dyOrja6LFxD+ZWS2++pKVKzVY2RuiZRfZcaLferUAfQbVWLLGWonpDpmtyvusRQT9D+0p2HS/JV92SbZvYZrr/ZflVb5z3jualbU6bD8nmj+1GmIRuzLzDSMItdwipy25GGgeT7xww9kydP7rt/2s5tOX5onW6P7JPSJTMYdVgdzzh+9cJftMYYY0xFPNEaY4wxFfFEa4wxxlSkk0arNnlqoNQGtZ129vvuu6/nfqg/6n4Zuovbqh9rW8gy1X7pu0uNUTVc2v21jedJzUD9vLZu3RraqPWqnsq+U6fQcImZ5p2lBuS2hD7Mei6ZHl9KvMf0m549e3ao05+41354TIaMNGYQaNNZ+/WHHSstta0tO062H45nXNOh+217V/U9z9LilRLjDTDkInXYbJzSeYbbsa6abb9rQfxFa4wxxlTEE60xxhhTkdR0nGVkaDMXZqYMXfrNT2+aRXU/NDvqkvFS8uw4NFFn/aPrirrPMNSYmqAZAozHyEzQWXYhmm1petFzzUz4be5aekzuh+em14HXNpMVnnrqqdA2d+7cUOc97rUfmoXaMm4YMwjwndN65nLSZoLOQt7yfWwbu3sdl2OUjgmU8ThGZaEKu7jacFzX8YSul9yvXge6Dupcx9+x71qnibwX/qI1xhhjKuKJ1hhjjKmIJ1pjjDGmIqlGm4XAox2buoD+lvrC2rVrm3Kbm5DqANROt2/fHuqaLo5959LvLG0TtQg9tywVE/UDaiH6W/aPqGbQpjdm7j3ad15r3sPsmrTpPkoXfZTPhvaJuo/qwpnrmTH7CjXCLLaNzV36o+MQ18/omhO6SGbjBd/dbBzK0pKWEkPVcn7IUn1mrj/Ul7mtzg90Oe2Fv2iNMcaYiniiNcYYYyriidYYY4ypSC4UJnTRAdj25JNPNmXauLnfLA0dU81pGj1qoNQF1H+XfaCtX/2mqN+qhsG+clsNydgWVlH7S39m1lWLoJ9vFu4s8y3ONPc2uK3eU/qk8TroeXfxK+yiQxmzp+jyXGbhDzP/27Zts+O0+djq2hYdt0uJ/v08ZrbGg+MX+6DjJsdUhrVVjbYtnG+/15PjF8dQPY79aI0xxpgBwBOtMcYYU5HUHpgtPaeJcqQhDvl5n2VO4NJumiDUjNAWZkzNAwxxyN9OmTKlKdPEqy47dP3h0nSakjP0+ra51qiZJrvubW43er9pPuH91m15D4man3itMzeh7JhjlRXFmL1JvyEYRyPVkczVhmPW5s2bmzJlHz0O38fsvaY0x211XOeYryFvS4nZe7qMCRxbdGxmm2ZSK6WUgw8+uOe2vfAXrTHGGFMRT7TGGGNMRTzRGmOMMRUZsXtPmz08W0p90EEHNeUsvFUp0Q2HS7sZ9ov6qUIdQHUCpuqj5qg6AHVOTdNEzZhL0/U41Eb0GOwvj5mlbcrCkLWdZ5d0VZkOlKXNo0ZLDUbbu6TzsiZr9gW6aKsjddnh2Jy5wjGMra5zKSWu1cjCFralytSxJnNPLCWOm0yrSY1W99UWlrXfc+H4yvUqM2fObMr9hpv1F60xxhhTEU+0xhhjTEU80RpjjDEVSTVa6gJZGrXMVk17+Jw5c5rypk2b0g6q7ZwabOZ/RXs97e4Z1AxUG6bOqZoorwH7QG0ia9N9sT9ZuDPqEtqWpT0sJV5PauWZryzvA0OW8ZopGzZsCPUZM2b03DbTkK3RmkGki16arXtpe771t216qY5nbak99Z3j+JaNZ9l6GZ4nfXe3bNnSlKnR8lx0TOOYla3x4HimYxbHK4478+bNG/b4Gf6iNcYYYyriidYYY4ypSKcQjJn5ootZ7+STT27K1113XWijyUH70CXsY1vGG/3kZ/+6mD51W14v/k5DEdKsQRPEgQce2JSzjEGl5K5A2ge2ZWYtHpMmJu0vz5NuVxrCjCabjRs3hvo555xT+qEtJJ1NyWYQ6DKGtrlMZuiYwDF069atffchC4nbNqZm+9H3lS6IaipmnefCvmfzTmYG51yiY5aOvaXEELylxBCM/d4zf9EaY4wxFfFEa4wxxlTEE60xxhhTkRGnyeuyLW3lCxcubMrXXnttaGPGeg3RR41RNc9SYmhH2uC76AnUGFWD5FJ0/S01Y2qXWeoo9iHbNtN3eZ6qh6i2MNwxVP9gf7Kl89SXGWaaZxMkAAAOTUlEQVRR7ws1F4Z802cjo81twphBpItGq9vyd3wfNYSrprYrZehYqFpl23ujx8nGC+4nW+PBsI+s62+pwXJcyvqfbUsdVtfacPxasGBBqPeb2lDxF60xxhhTEU+0xhhjTEU6RYbST29+ltN8mJkhzz///Kb8L//yL2kH9dOcbjfM/KJmSfaH7ilqTuG248ePD3U1mdD82quvwx1T90MzDM0V2k5zMM0leh0YAUtNJFxWz/PWa5JFBSslmo7Zd5pl1OS/du3a0DZhwoRQnzx5clPmNcrM1Xbn2bfoYvrPxqEu7jNt++23P23mVh0H+AxnZtzsOBzraG7V/XAM4DG03ja2ZOZhHW95DEpCGuGJWdiYuSzLWsTjZO6f3FalPI5ZOuarO2IpQ917tE+ODGWMMcYMAJ5ojTHGmIp4ojXGGGMqMmKNts02rfZxuqOozklbObM1qLsP7f5ZloXdu3en26prEG37/G12LnpN2NYlBBj1D9VOqDXw2utxeEzVVrNsFuwD+0O3K9VWqVtTI9Jz4bL/ww8/PNRVh8/6m2XxMPseI9XY255pJdOFu2TH4fuYvfdtGbOysLEaOpFuhdnzzneDx8x04ey3bNM1H9RdWdf+cwwgmS6chcvlfummqeMS15HoeHzMMceEtiwzXb/4i9YYY4ypiCdaY4wxpiKeaI0xxpiKdBK2Mj2EtnPVG6hHqq/kb/7mb4a2T3/606GuugDt6rTJqz2f9nn6kOq+Mj+z4X7ba1va8jMtZzRp3XicfjWENm1E9VzeM/q7Tpo0qSnTv5nPieru1Hne8IY3hDqP24su2pzZt8m01S5hC0eD7qtLukn6nxP1h2U6O11XwneM4Wf1vWnTaLP+ZSk4qRPruLht27bQxm2zELjsXxZ+lvdUfXkz//1S4hjGbefMmdOU6Uc7FmONRytjjDGmIp5ojTHGmIp0yt6jn9Btn/vZtvr5/773vS+0XXHFFaGurjZt4cwUmipoRshMOm3L93ttS3NOFi6szT1Fl6Jn5hzC88wy+2RuQ3TnoQuPHqctjJtCE/RrX/vatE9KZhYcTSg+s+fpElax7bcjPWa//WE9c9srJb4PHBNoHtZ3mXKXjlFZCFQeJxt3Ssmzz1AmU5fKHTt2hDYNncjz7PI+cjzTOscz7kfHSV4ThtJVV1KOb0cccURffR0p/qI1xhhjKuKJ1hhjjKmIJ1pjjDGmIqlG2682WUq7a0uvtmnTpoW297znPaF+2WWX9dwPQ/2p+wqXw1PjUC2iLcSh1tmm5033GR5T69RjurhOEf0t74O2MQxlpndQw+D1VB1Fl9iXMlQ713M955xz0m0zfSbre5uuZwabkd6/kaa6I3xXszGBmiLTYaoOy/R2fO/1Ged+snR7rOu2betndJyiGw51WO0/33O9ZlynkaWxbOufXhOOARwvDjrooJ7b0oVHx7eTTjqp9IJjcbb+qF/8RWuMMcZUxBOtMcYYUxFPtMYYY0xFOvnR9quflRLt2JlfKtsuvfTSUP/mN7/ZlLP0daVE7YF+n/QPU82Adv8sxFqbntvrGKxnGkYpUfPgfaA2rf3P7gtTElJT1v3yGET7y22zMGkXXnhhaMv87ewb++Kly30n/T4HbTq+jmd8H/meq58oww1SW9X3kWsbsnUcme9uW5hY3ZY+rdSJNeyj+sIOV89CJ/abPpTbUg/P1r1kbawzdCL7sGjRoqbMsTDzLR4L/EVrjDHGVMQTrTHGGFORTtl79DOen9c006hZhp/wuh+aHKZOnRrqv/d7v9eUP/WpT4U2mmF0X1u2bAltEydODHU1X2gmGva9lGg+oSklW+pNk6qaOdrco7Jwg5lbDo+p17rNHJy5MbG/anphf2hKO+WUU5oyr3UW5rPtGcv2YwabLvdrpK5cbeZMbae5leZhNam2Zc/ivvqFY0sWJpZjn2bIoqmYLjz6fra5NWVt+lu6DmYmZ5K58HDMonsUJUJlyZIloa6m5S6y1FiYkv1Fa4wxxlTEE60xxhhTEU+0xhhjTEVSjTbTEWnbzzRGaiVqv8/cY0qJ7j533HFHaLvzzjtDXY/DcGGqYZRSysyZM5typiGXEvUHaiOsK9l+6W6UhWZr06SyMIsahqzNJUtdEahZZGm62L+dO3eG+vvf//6e22ZuCuyvPjdt6bPMvkvbvez3XvOZ5doLfU75zJIs1Gqm4WXhD9meuSuyf3R1VB2WmmyWHjNbj0Kol2qd+8nuEV2eMvdKuuEwzaaO87/2a78W2iZPnhzqmTtSRtu42Q/+ojXGGGMq4onWGGOMqYgnWmOMMaYiqUCa2abbfDIzst9mKYn+8A//MLT95Cc/CXX1dWM4rk2bNoW6+snNmDEj7W8WnkvTNG3YsKFnWylRO+F5EvV1o26Spdhjaqhex+fvSsl1YWq/Cvv3rne9K9SZCjHrQ8Zonjmz75JpoNQ8Ve/jOoj169f33E/bmgR9X9nG32p/2Xe+K/pO0v9VU9axjb66XcI1dvFxzcK76n74HnO8yHz0+Vsdu6mzsu9vfvObex6zzY+6FzVScPqL1hhjjKmIJ1pjjDGmIp1CMO4JMpciDeVXSikXXXRRqF9++eVNWTNUlDJ0Sbnuty3DhporaMJRswJNxZn5ieZfovulyZT9VVMH3ZoUnte4ceN69o9mIh4zc0t45zvf2bMPxpSSh9dsM9VlJkuViCgXMVyfvg9tYfa0f3w3aKLW+q5du0Ib5Rt122GbvsuUmjLzdZYtjfW2MKdZGMgslC77q2MYTbwMy6r74rhz1lln9exDm9tX9lxlvx0L10F/0RpjjDEV8URrjDHGVMQTrTHGGFORgdNoqX+ojkKt8k//9E9D/a677mrKy5cvT/erdneGM6NNXlPsZfoHtYetW7eGuuoN1EupRSjUgBiWTOtcOq+6T3YMtjPUGTWY1atXN+XPfe5zaf+MIV1cL+jSobon02HqGMF3IQudmKX5LCVqlRwvqMNqO7elW466JPKYWcjATDdscz/K2rKQuFm6Tv6Oa1DUZYcuiLz3RxxxRFM+5phjevanjSwtItu0XiOcq79ojTHGmIp4ojXGGGMq4onWGGOMqcjAabSEKZQU2vo/+9nPNuXf/d3fDW3UaNUflsegJqR1ag9Z6jtqJVpv8zvL/Pr4W+0Dz1Pr1JCzsIo8xiOPPBLq//AP/9CUFyxYENrGImSZeXFDvVR9RpnSku+V6pxcd5Cl4Mzea/aHPq3ql5+lqGN/sxR1peR+qvoecbvsHWsLa6oaJMcdjh+Zn6qOH7wPU6dODXUdY+nPfOKJJ4a6hl3MQmyyT22hEzPNW7fl7+xHa4wxxgw4nmiNMcaYirzslwNm56NpJese29Tswcw+//Ef/xHqGp6Nph+aadSsRdOxmkxoJtLwamzPlvlzWx5z/Pjxoa4uDTRzaBvNLjymHof9e8tb3hLqH/7wh3vul9fPWXcMueKKK0L9mmuuacrz588PbRdccEGoa+hQPqea9YrjA99zrevvhqtn4RCz7DhtmXQyk6XWs6xmw9Uz1BzMY/JdVVc9Sk3qssMxiaibzty5c3v2p5T+3XC6bqtwnMzCUpIu17r5TedfGGOMMaZvPNEaY4wxFfFEa4wxxlRk4DTaTItoW4quS8Fpg6d7ymc+85mm/PDDD/c8Jo9LPSFbXp65DLDv1Jqy8GYMcai3kK4Qqqu0XT9NV3XaaaeFto985CM9+0c9i5rySDQN8+KGLh2aYvKOO+4IbZdcckmoL168uClz+GIaS4XPf6al8t3Q4/B5ztJhtvUh0wO1D5k2WUp8H9vcFXVdCTVZukyqHs5j6j2jrr5w4cJQ1zGrzV1Gr1Hb2NFFo82O2yUEozVaY4wxZsDwRGuMMcZUZOAiQ2VmF5pzaLJREys/74866qhQv/jii5vyN7/5zdC2cuXKnsfJMunQZMpzyaJIZVkz1LVguP2qKYht6sLD5fmHHXZYqGvWjA996EOhLYu6wvPOMhwZU0ops2fPDnV1x5s2bVpo47P3+OOPN2U+a2o2/f/au3scxYEgDMN1AhBIEHEE7sAduCa3ISBBJIT8CI6wGfrq07o8nlVpCd4ncqtnbGPhbtHV1T32vauGg6th5jHVrkDVezRll51qBSf/LL5q02q1GryOP08NJ/kKcFoeS+HT+/+XlJ0pz6hjF57fogUEAKARHS0AAI3oaAEAaPR16T3V7YwtjTVWrx6Px+f4dDqlusPhkMqXy+Vz7GlCupSjx3H8s2jsabFYpDqP2VZpTR5z0biUx611uv5ut0t1+/0+lTWlp9olIyKnLnl85ptjJfgO5/M5lXU3KP++624uEXmuQbWsqL9TVbyv2s1lKm2H/Dw+F0NjrR531fvzlD5Nu4nI77nPmfClJ/V5bjabVOdlbbP8GWl7V+0aFjEtfWaKKbv3dFzzp/hFCwBAIzpaAAAa0dECANDo62K0/4MvlehLCmrensZrI3I8yWMjz+czlfV/j8djqrter4P34DEqzW2LyEuheW6sbkm13W5TnW9tpef1GBDLKqKTNkO+faPOp4jI78rtdkt1Wr7f76nOv8Oan17FHyNy3HVsLobGYX3+gueya+y1mhfh9+efZblcfo49D9nbj/V6PXh/VfyxmgNDe1Dj6QAA0IiOFgCARgwdx/g0cK334ZNqGbfqPNXOQxF5Nx8/r6cIaLnaXchTf3yYSK8zZar82HR30nvgfPhVv7djS/Rp2VOBdDjYh6A1FS8i4v1+D/7t6/VKZQ3l+JKoTodjNawTETGbzVJ5Pp9/jj1co2U/jw8d6zXH2ouf7mLzt3MN4R2v8YsWAIBGdLQAADSiowUAoBExWgAAGvGLFgCARnS0AAA0oqMFAKARHS0AAI3oaAEAaERHCwBAoz+SeiLYnB8j/wAAAABJRU5ErkJggg==\n", + "text/plain": [ + "<Figure size 720x720 with 6 Axes>" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "m = 30 # Number of images to include in dataset\n", + "dataset = np.zeros((m,50,50), dtype=np.uint8) # create numpy array for images and fill with zeros \n", + "D = 50*50 # length of raw feature vectors \n", + "\n", + "for i in range(1, m+1):\n", + " # With convert('L') we convert the images to grayscale\n", + " try:\n", + " img = Image.open('/coursedata/fruits/%s.jpg'%(str(i))).convert('L') # Read in image from jpg file\n", + " except:\n", + " img = Image.open('../../data/fruits/%s.jpg'%(str(i))).convert('L') # Read if you are doing exercise locally\n", + " dataset[i-1] = np.array(img, dtype=np.uint8) # Convert image to numpy array with greyscale values\n", + " \n", + "# Store raw image data in matrix Z\n", + "Z = dataset.reshape(m,-1) # Reshape the 50 x 50 pixels into a long numpy array of shape (2500,1)\n", + "print(\"The shape of the datamatrix Z is\", Z.shape) \n", + "\n", + "# Display first three apple images (fruits1.jpg,fruits2.jpg,fruits3.jpg) \n", + "# and first three banana images (fruits16.jpg,fruits17.jpg,fruits18.jpg)\n", + "fig, ax = plt.subplots(3, 2, figsize=(10,10), gridspec_kw = {'wspace':0, 'hspace':0})\n", + "for i in range(3):\n", + " for j in range(2):\n", + " ax[i,j].imshow(dataset[i + (15*j)], cmap='gray')\n", + " ax[i,j].axis('off')\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "451b161856747ffddbfdcf8648b602b7", + "grade": false, + "grade_id": "cell-1d01aef88d590379", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "## Principal Component Analysis\n", + "<a id=\"Q1\"></a>\n", + "\n", + "### Basic Idea of Linear Dimensionality Reduction \n", + "\n", + "Suppose we have at our disposal a dataset \n", + "\n", + "\\begin{equation}\n", + " \\mathbf{Z} = \\begin{bmatrix}\n", + " z_1^{(1)} & z_2^{(1)} & \\ldots & z_D^{(1)} \\\\\n", + " z_1^{(2)} & z_2^{(2)} & \\ldots & z_D^{(2)} \\\\\n", + " \\vdots & \\vdots &\\ddots & \\vdots \\\\\n", + " z_1^{(m)} & z_2^{(m)} & \\ldots & z_D^{(m)}\n", + " \\end{bmatrix}\\in \\mathbb{R}^{m \\times D},\n", + "\\end{equation}\n", + "\n", + "where $m$ is the number of datapoints and $D$ the number of features. The goal of dimensionality reduction is to transform the dataset $\\mathbf{Z}$ into a lower dimensional dataset\n", + "\n", + "\\begin{equation}\n", + " \\mathbf{X} = \\begin{bmatrix}\n", + " x_1^{(1)} & x_2^{(1)} & \\ldots & x_n^{(1)} \\\\\n", + " x_1^{(2)} & x_2^{(2)} & \\ldots & x_n^{(2)} \\\\\n", + " \\vdots & \\vdots &\\ddots & \\vdots \\\\\n", + " x_1^{(m)} & x_2^{(m)} & \\ldots & x_n^{(m)}\n", + " \\end{bmatrix}\\in \\mathbb{R}^{m \\times n},\n", + "\\end{equation}\n", + "\n", + "where $n < D$. As was mentioned in the introduction, the low-dimensional representation\n", + "\n", + "- reduces the computational requirements of training an ML model on the dataset\n", + "- provides short but informative representations of the datapoints, which might improve generalization capability of an ML model\n", + "- makes it easier to visualize high-dimensional data.\n", + "\n", + "In **linear dimensionality reduction** the datapoints $\\mathbf{z}^{(i)}, i=1.\\ldots,m$ are transformed to the corresponding low-dimensional representations $\\mathbf{x}^{(i)}$ with a linear transformation \n", + "\n", + "\\begin{equation}\n", + "\\begin{aligned}\n", + " &\\mathbf{x}^{(i)} =\\mathbf{W} \\mathbf{z}_c^{(i)}= \\mathbf{W} \\big( \\mathbf{z}^{(i)} - \\overline{\\mathbf{z}}\\big),& \\overline{\\mathbf{z}} = (1/m) \\sum_{i=1}^m \\mathbf{z}^{(i)}.\n", + "\\end{aligned}\n", + "\\end{equation}\n", + "\n", + "Here, the compression matrix $\\mathbf{W} \\in \\mathbb{R}^{n \\times D}$ maps the centered feature vectors $\\mathbf{z}_c^{(i)} \\in \\mathbb{R}^D$ to their low-dimensional representations $\\mathbf{x}^{(i)} \\in \\mathbb{R}^n$. Observe that the sample mean $\\overline{\\mathbf{z}}$ is subtracted from the datapoint $\\mathbf{z}^{(i)}$ before performing the linear transformation with $\\mathbf{W}$. This results in the transformed dataset $\\mathbf{X}$ being centered at the origin.\n", + "\n", + "\n", + "### Optimal dimensionality reduction\n", + "\n", + "The central objective in linear dimensionality reduction is to find the optimal compression matrix $\\mathbf{W}$ **that results in the least amount of information loss** when transforming $\\mathbf{Z}$ into its lower-dimensional representation $\\mathbf{X}$. Let us assume that the rows of $\\mathbf{W}$ are orthogonal unit vectors.\n", + "\n", + "A natural measure of the information loss incurred by the transformation is the **reconstruction error**\n", + "\n", + "\\begin{equation}\n", + "\\begin{aligned}\n", + " \\mathcal{E}(\\mathbf{W}) &= (1/m) \\sum_{i=1}^{m} \\| \\mathbf{z}_c^{(i)} - \\hat{\\mathbf{z}}_c^{(i)} \\|^{2} \\\\ &= (1/m) \\sum_{i=1}^{m} \\| \\mathbf{z}_c^{(i)} - \\mathbf{W}^T \\mathbf{x}^{(i)} \\|^{2} \\\\ &= (1/m) \\sum_{i=1}^{m} \\| \\mathbf{z}_c^{(i)} - \\mathbf{W}^T \\mathbf{W} \\mathbf{z}_c^{(i)} \\|^{2}.\n", + "\\end{aligned}\n", + "\\end{equation}\n", + "\n", + "Here, the centralized **reconstruction** $\\hat{\\mathbf{z}}_c^{(i)} \\in \\mathbb{R}^D$ of the datapoint $\\mathbf{z}_c^{(i)}$ is obtained by the formula\n", + "\n", + "\\begin{equation}\n", + "\\hat{\\mathbf{z}}_c^{(i)} = \\mathbf{W}^T \\mathbf{x}^{(i)} = \\mathbf{W}^T \\mathbf{W}\\mathbf{z}_c^{(i)}. \n", + "\\end{equation}\n", + "\n", + "By multiplying the transformed datapoint $\\mathbf{x}^{(i)}$ by $\\mathbf{W}^T$ from the left, the transformed datapoints are defined in terms of the original features of the data (i.e. in terms of the coordinates in $\\mathbb{R}^D$). This enables us to calculate the distance between the original and compressed datapoints in the original space $\\mathbb{R}^D$, and hence also the reconstruction error. Furthermore, it can be shown that the matrix $\\mathbf{W}^T\\mathbf{W}$ is an [**orthogonal projection**](https://en.wikipedia.org/wiki/Projection_(linear_algebra)) matrix that (orthogonally) projects the datapoints $\\mathbf{z}_c^{(i)}$ onto the subspace spanned by the rows of $\\mathbf{W}$.\n", + "\n", + "Note that the reconstruction error can be given equivalently in terms of the non-centralized datapoints $\\mathbf{z}^{(i)}$ and reconstructions $\\hat{\\mathbf{z}}^{(i)}$:\n", + "\n", + "\\begin{equation}\n", + "\\begin{aligned}\n", + " \\mathcal{E}(\\mathbf{W}) &= (1/m) \\sum_{i=1}^{m} \\| \\mathbf{z}_c^{(i)} - \\hat{\\mathbf{z}}_c^{(i)} \\|^{2} \\\\ &= (1/m) \\sum_{i=1}^{m} \\| \\mathbf{z}_c^{(i)} + \\overline{\\mathbf{z}} - \\overline{\\mathbf{z}} - \\hat{\\mathbf{z}}_c^{(i)} \\|^{2} \\\\ &= (1/m) \\sum_{i=1}^{m} \\| (\\mathbf{z}_c^{(i)} + \\overline{\\mathbf{z}}) - (\\hat{\\mathbf{z}}_c^{(i)} + \\overline{\\mathbf{z}}) \\|^{2} \\\\ & = (1/m) \\sum_{i=1}^{m} \\| \\mathbf{z}^{(i)} - \\hat{\\mathbf{z}}^{(i)} \\|^{2}.\n", + "\\end{aligned}\n", + "\\end{equation}\n", + "\n", + " More descriptively speaking, the reconstruction error gives the mean squared distance between the true datapoints $\\mathbf{z}^{(i)}$ and the reconstructed datapoints $\\hat{\\mathbf{z}}^{(i)} = \\mathbf{W}^T \\mathbf{x}^{(i)} + \\overline{\\mathbf{z}}$. This is a very intuitive quantification of information loss - the less information is lost in the transformation to the lower dimenstional space, the closer the reconstruction $\\hat{\\mathbf{Z}}$ is to the true data $\\mathbf{Z}$.\n", + "\n", + "The fundamental result of **principal component analysis** (PCA) is that the reconstruction error is minimized when\n", + "\n", + "\\begin{equation}\n", + " \\mathbf{W} = \\mathbf{W}_{\\rm PCA} = \\big(\\mathbf{u}^{(1)}, \\mathbf{u}^{(2)}, \\ldots, \\mathbf{u}^{(n)} \\big)^T,\n", + "\\end{equation}\n", + "\n", + "where the rows $\\mathbf{u}^{(i)}$ are the $n$ first **principal components** of $\\mathbf{Z}$. The first principal component is chosen so that it corresponds to the direction in $\\mathbb{R}^D$ along which the data $\\mathbf{Z}$ exhibits the most variance. The subsequent components are chosen so that the $i$:th component corresponds to the direction that exhibits the most variance, **while being orthogonal** to the components $1 \\leq j < i$.\n", + "\n", + "When we use PCA to transform the datapoints in $\\mathbf{Z}$ to the lower dimensional space $\\mathbb{R}^n$ (where each datapoint has $n$ features) we say that we perform PCA with $n$ components. An important consequence of the definition of $\\mathbf{W}_{\\rm PCA}$ is that the order of the rows in $\\mathbf{W}_{\\rm PCA}$ is the same, regardless of the amount of components used in the transformation. Given two PCA matrices $\\mathbf{W}_n$ and $\\mathbf{W}_k$ with $n$ and $k < n$ components respectively, the k first components in $\\mathbf{W}_n$ will be identical to the ones in $\\mathbf{W}_k$.\n", + "\n", + "### Example: PCA in 2d with one component\n", + "\n", + "The image below visualizes principle component analysis in 2-d, and shows the datapoints along with the first - and in this case the only - principal component. When the dataset is transformed into a lower dimensional representation, the data points are transformed onto the subspace spanned by the principal component as shown by the red lines.\n", + "\n", + "<img src=\"../../../coursedata/R6_DimRed/pca.png\" alt=\"Drawing\" style=\"width: 600px;\"/>\n", + "\n", + "In this case, the transformed data is one-dimensional and each datapoint will only be characterized by its value along the axis spanned by the 1st principal component. Even though this is not shown in the image, it is easy to imagine the green line being the sole axis and the datapoints being located on that 1-d axis at the points indicated by the red lines.\n", + "\n", + "When the data is reconstructed the resulting data points $\\hat{\\mathbf{z}}^{(i)}$ will be located on the 1st principal component in 2-d space as shown in the image. As such, the red lines represent the difference between the original and reconstructed datapoints, and the reconstruction error corresponds to the mean of the squared lengths of these lines.\n", + "\n", + "Based on the image, it should be quite easy to believe that the principal component in fact minimizes the reconstruction error. If we would move or rotate the green component, the total squared distances would inevitably get larger. We can also verify that this component also corresponds to the direction of largest variance in the data. \n", + "\n", + "### Transforming entire datasets\n", + "\n", + "So far in this notebook, the transformation formulas have been of the form used to transform one datapoint at a time. In practice, we typically want to compress or reconstruct the entire dataset at once. The compression of the entire dataset $\\mathbf{Z}$ is done by the transform\n", + "\n", + "\\begin{equation}\n", + "\\mathbf{X} = \\mathbf{Z}_c\\mathbf{W}_{\\rm PCA}^T = \\big(\\mathbf{Z} - \\overline{\\mathbf{Z}}\\big) \\mathbf{W}_{\\rm PCA}^T,\n", + "\\end{equation}\n", + "\n", + "where $\\overline{\\mathbf{Z}}$ is a matrix with the sample mean vector in every row.\n", + "\n", + "In order to understand why this transform works, first recall that the individual datapoints in the matrices $\\mathbf{Z}$ and $\\mathbf{X}$ are stacked row-wise. By standard matrix multiplication we get that the $i$:th row in $\\mathbf{X}$ corresponding to the $i$:th compressed datapoint is given as\n", + "\n", + "\\begin{equation}\n", + " {\\mathbf{x}^{(i)}}^T = {\\mathbf{z}_c^{(i)}}^T \\mathbf{W}_{\\rm PCA}^T,\n", + "\\end{equation}\n", + "\n", + "which is clearly just the transposed equivalent of \n", + "\n", + "\\begin{equation}\n", + " {\\mathbf{x}^{(i)}} = \\mathbf{W}_{\\rm PCA} {\\mathbf{z}_c^{(i)}}.\n", + "\\end{equation}\n", + "\n", + "Hence we can use the matrix multiplication defined above to compress the dataset instead of compressing each individual datapoint separately. By similar arguments we find that the formula for the reconstructed data is\n", + "\n", + "\\begin{equation}\n", + " \\hat{\\mathbf{Z}} = \\mathbf{X}\\mathbf{W}_{\\rm PCA} + \\overline{\\mathbf{Z}}.\n", + "\\end{equation}\n", + "\n", + "If you are not completely convinced by this, it is a good exercise to verify that the rows of the $\\hat{\\mathbf{Z}}$ contain the reconstructed datapoints!\n", + "\n", + "**General note:** In this section we have defined $\\mathbf{W}_{\\rm PCA}$ such that the principal components are in the rows of the matrix. In contrast, some sources define the matrix such that the components are in the columns. If you consult other resources and find seemingly different transformation formulas, be sure to check the definition of the matrix in order to avoid unnecessary confusion!" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "13cd2cc75df4b9b71bc9a3722817191c", + "grade": false, + "grade_id": "cell-c034306c58b538dd", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "## PCA Off-The-Shelf in Python\n", + "\n", + "<img src=\"../../../coursedata/R6_DimRed/transforms.png\" alt=\"transforms\" />\n", + "\n", + "The Python library `scikit-learn` provides the class `PCA`. This class provides methods to compute the optimal compression matrix $\\mathbf{W}_{\\rm PCA}$ of the data $\\mathbf{Z}$, as well as methods with which to perform PCA transformations and reconstructions. The image above shows how the methods of the `PCA` object correspond to the transformations presented in this notebook. Below, we give a brief description of the most important methods in the class.\n", + "\n", + "- `PCA.fit(Z)` calculates $\\mathbf{W}_{\\rm PCA}$ and stores it in the attribute `PCA.components_`. Note that the data matrix `Z` should be uncentered, since the `PCA` class stores the sample mean of the data that is used when transforming and reconstructing data with the fitted `PCA` object.\n", + "\n", + "- `PCA.transform(Z)` performs PCA transformation on `Z` in order to get the lower dimensional representation `X` of the data. Observe that the function performs the centralization of `Z` \"under the hood\", and as such the resulting compression is of the form $\\mathbf{X} = \\mathbf{Z}_c\\mathbf{W}^T$.\n", + "\n", + "- `PCA.inverse_transform(X)` performs the inverse operation to `PCA.transform(Z)`, that is, it takes as input the PCA transformed data `X` and returns a reconstrution `Z_hat` with the same dimensionality as the original data `Z`. Mathematically, the reconstruction is of the form $\\hat{\\mathbf{Z}} = \\mathbf{XW} + \\overline{\\mathbf{Z}}$, where each row of $\\overline{\\mathbf{Z}}$ is the sample mean." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "d02a37d58e56ef2d918849d62a9e0b09", + "grade": false, + "grade_id": "cell-0792063f0c3913c0", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='ImplementPCA'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <b>Student Task.</b> Compute PCA. <br/>\n", + "\n", + "Apply PCA to the data matrix $\\mathbf{Z}$ obtained from the fruit images. In particular, compute the optimal compression matrix `W_pca` for the specified number $n$ of PCs (the value of n is set in the code snippet below). \n", + "The compression matrix should be stored in the numpy array `W_pca` of shape $(n,2500)$. Also, compute the corresponding reconstruction error\n", + "\\begin{equation}\n", + "(1/m) \\sum_{i=1}^{m} \\big\\| \\mathbf{z}^{(i)} - \\hat{\\mathbf{z}}^{(i)} \\big\\|^{2}_{2}= \n", + "(1/m) \\sum_{i=1}^{m} \\big\\| \\mathbf{z}^{(i)} - \\mathbf{W}_{\\rm PCA}^{T} \\mathbf{W}_{\\rm PCA} \\mathbf{z}^{(i)} \\big\\|^{2}_{2} \n", + "\\end{equation} \n", + "\n", + "You should store the reconstruction error in variable `err_pca`. \n", + " \n", + "Hints: \n", + "- Use the Python class `PCA` from the library `sklearn.decomposition` to compute the optimal compression matrix $\\mathbf{W}_{\\rm PCA}$ for the feature vectors (representing fruit images) in the rows of `Z`.\n", + "- Use the functions described above to calculate the reconstruction `Z_hat`. You can take advantage of `pca.transform` and `pca.inverse_transform` functions.\n", + "- Finally, use the resulting `Z_hat` to calculate the reconstruction error according to the formula above.The easiest way to calculate the error might be to expand the formula for the reconstruction error above, and try to understand how this formula could be obtained using the elements of the matrix subtraction `Z - Z_hat`.\n", + "- documentation: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html\n", + " \n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "526b8cb3427f3d4ae6859951dcd740e4", + "grade": false, + "grade_id": "cell-3e96f85639fb5b97", + "locked": false, + "schema_version": 3, + "solution": true + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Shape of Z: (30, 2500)\n" + ] + }, + { + "ename": "NameError", + "evalue": "name 'W_pca' is not defined", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", + "\u001b[0;32m<ipython-input-3-b109370def79>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 9\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf'Shape of Z: {Z.shape}'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 10\u001b[0;31m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf'Shape of compression matrix W: {W_pca.shape}'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 11\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf'PCA error: {err_pca}'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;31mNameError\u001b[0m: name 'W_pca' is not defined" + ] + } + ], + "source": [ + "from sklearn.decomposition import PCA\n", + "\n", + "n = 10 # Define the number of principal components\n", + "\n", + "### STUDENT TASK ###\n", + "# YOUR CODE HERE\n", + "\n", + "\n", + "print(f'Shape of Z: {Z.shape}')\n", + "print(f'Shape of compression matrix W: {W_pca.shape}')\n", + "print(f'PCA error: {err_pca}')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "873c7fe22fa7b9716fe8fbf60d152011", + "grade": true, + "grade_id": "cell-fe48a7f5675575ee", + "locked": true, + "points": 3, + "schema_version": 3, + "solution": false + } + }, + "outputs": [], + "source": [ + "# Perform sanity checks on the results\n", + "assert W_pca.shape == (n, Z.shape[1]), \"Output matrix (W_pca) dimensions are wrong.\"\n", + "assert err_pca <= 1e6, \"The reconstruction error is too high.\"\n", + "\n", + "print('Sanity checks passed!')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "1f850119c48796138b2bd39fa05f02c8", + "grade": false, + "grade_id": "cell-2164880970b6db31", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "## Interpretation of the compressed data and compression matrix\n", + "\n", + "While we know that the features of the original dataset $\\mathbf{Z}$ represents the intensities at each pixel, we have not considered how the compression matrix and compressed data can be interpreted. As was shown in the example earlier in the notebook, **each principle component $\\mathbf{u}^{(i)}$ stored in the rows of $\\mathbf{W}_{\\rm PCA}$ corresponds to a direction in the space that is spanned by the features of $\\mathbf{Z}$**. With the image data used in this notebook, these directions correspond to a vector of pixel values and can be represented as an image.\n", + "\n", + "By multiplying the centralized feature vector $\\mathbf{z}_c$ with the compression matrix $\\mathbf{W}_{\\rm PCA}$, we get the compressed vector $\\mathbf{x}$ in which each element $x_i$ represents the position of the centralized datapoint $\\mathbf{z}_c$ along the $i$:th principle component. In essence, **the values of the elements $x_i$ quantify the amount of the principle component $\\mathbf{u}^{(i)}$ present in the datapoint $\\mathbf{z}_c$**. With this in mind, it is very intuitive that the centralized reconstruction $\\hat{\\mathbf{z}}_c$ is given by the linear combination \n", + "\n", + "\\begin{equation}\n", + "\\hat{\\mathbf{z}}_c = \\mathbf{W}_{\\rm PCA}^T \\mathbf{x} = \\left[ \\mathbf{u}^{(1)}, \\mathbf{u}^{(2)}, \\ldots, \\mathbf{u}^{(n)}\\right] \\mathbf{x} = \\sum_{i=1}^n \\mathbf{u}^{(i)}x_i\n", + "\\end{equation}\n", + "\n", + "of the principle components $\\mathbf{u}^{(i)}$ in the rows of $\\mathbf{W}_{\\rm PCA}$ (and the columns of the transpose). In the case of the image data, this means that each image can be given as a linear combination of images corresponding to the principle components.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "092f4f2bec42d5436ffe746217a4fa77", + "grade": false, + "grade_id": "cell-4b57a2247992e427", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='reconstructionerror'></a>\n", + "<div class=\" alert alert-info\">\n", + " <b>Demo.</b> Principal Directions. <br/> \n", + " \n", + "Keeping the above in mind, it is instructive to examine the principle components $\\mathbf{u}^{(1)},\\mathbf{u}^{(2)}...,\\mathbf{u}^{(n)}$ in the rows of optimal compression matrix $\\mathbf{W}_{\\rm PCA}$. Since each successive principle components explains a decreasing amount of variance in the original data, it is especially interesting to examine the first principle components as they explain the largest amount of variance.\n", + "\n", + "The code snippet below plots the five first principal directions.\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "0aa04edc265f5b867b43b53aea71a75d", + "grade": false, + "grade_id": "cell-090d1f05afe9bc56", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "outputs": [], + "source": [ + "def plot_princ_comp(W_pca):\n", + " fig, ax = plt.subplots(1, 5, figsize=(15,15))\n", + " # Select the PCs we are plotting\n", + " # You can change these to see what other components look like\n", + " plot_pd = [0,1,2,3,4]\n", + "\n", + " for i in range(len(plot_pd)):\n", + " ax[i].imshow(np.reshape(W_pca[plot_pd[i]], (50,50)), cmap='gray')\n", + " ax[i].set_title(\"Principal Direction %d\"%(plot_pd[i] + 1))\n", + " ax[i].set_axis_off() # Remove x- and y-axis from each image\n", + " plt.show()\n", + "\n", + "plot_princ_comp(W_pca)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "76397d3e4930e0fc089b4f21bdc75bb8", + "grade": false, + "grade_id": "cell-95adb56538f4c2c8", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "Now we can try interpret the meaning of the principle directions from the images above. For example, the principle direction 1 seems to correspond to the level of some kind of \"appleness\" of the image, whereas the third direction seems to capture some kind of \"banananess\". Doing this kind of interpretation can help us in obtaining insight of a dataset by finding meaning in the directions of the largest variance. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "d4d0deaf46a35a7b98b9c7e04c7f0a6f", + "grade": false, + "grade_id": "cell-e84eac83a215b2b4", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "## Trading Compression Against Information Loss\n", + "<a id=\"Q2\"></a>\n", + "\n", + "Next, we study the effect of using different numbers $n$ of PCs for the new feature vector $\\mathbf{x}$. In particular, we will examine whether there is a trade-off between amount of compression (larger for smaller $n$) and the amount of information loss, which is quantified by the reconstruction error." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "76257dda8d7cbf77c6b2c1dad8b262e5", + "grade": false, + "grade_id": "cell-1a63c986f3d58d4c", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='reconstructionerror'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <p><b>Student Task.</b> Reconstruction Error vs. Number of PCs. <br/> </p>\n", + " \n", + "Use the fruit image data `Z` and analyze how the reconstruction error of PCA changes with the number of PCs n used. In particular, apply PCA to the dataset `Z` for varying number of PCs $n=1,\\ldots,m$. For each choice for $n$, compute the corresponding reconstruction error and store it in the numpy array `err_pca` of shape (m, ).\n", + "\n", + " \n", + " \n", + "Hints:\n", + " \n", + "For each number $n$ of PCs:\n", + " \n", + "- Compute the compressed dataset $\\mathbf{X}$.\n", + " \n", + "- Use $\\mathbf{X}$ to compute the optimal reconstruction $\\widehat{\\mathbf{Z}}$.\n", + " \n", + "- Use $\\widehat{\\mathbf{Z}}$ to compute the reconstruction error $(1/m) \\sum_{i=1}^{m} \\big\\| \\mathbf{z}^{(i)} - \\widehat{\\mathbf{z}}^{(i)}\\big\\|^{2}$.\n", + " \n", + "- Store the reconstruction error in the numpy array `err_pca`. The first entry should be the reconstruction error obtained for $n=1$.\n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "c6584465080a5dd9f1a23c5902ef0c6c", + "grade": false, + "grade_id": "cell-65dd9f2f576d010d", + "locked": false, + "schema_version": 3, + "solution": true + } + }, + "outputs": [], + "source": [ + "err_pca = np.zeros(m) # Array for storing the PCA errors\n", + "\n", + "for n_minus_1 in range(m):\n", + " ### STUDENT TASK ### \n", + " # Compute the reconstruction error for PCA using n PCs and store it in err_pca[n-1]\n", + " # YOUR CODE HERE\n", + " raise NotImplementedError()\n", + " \n", + "# Plot the number of PCs vs the reconstruction error\n", + "plt.figure(figsize=(8,5))\n", + "plt.plot([i + 1 for i in range(m)], err_pca)\n", + "plt.xlabel('Number of PCs ($n$)')\n", + "plt.ylabel(r'$\\mathcal{E}(\\mathbf{W}_{\\rm PCA})$')\n", + "plt.title('Number of PCs vs the reconstruction error')\n", + "plt.show() " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "8e5aa6af7c4d9f826a22d0c2d4811a4d", + "grade": true, + "grade_id": "cell-4c0ec1f5bc750139", + "locked": true, + "points": 3, + "schema_version": 3, + "solution": false + } + }, + "outputs": [], + "source": [ + "# Perform some sanity checks on the outputs\n", + "assert err_pca.shape == (m, ), \"shape of err_pca is wrong.\"\n", + "assert err_pca[0] > err_pca[m-1], \"values of err_pca are incorrect\"\n", + "assert err_pca[0] > err_pca[1], \"values of err_pca are incorrect\"\n", + "\n", + "print('Sanity checks passed!')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "e536cb43e2b84036f34476c139c7221b", + "grade": false, + "grade_id": "cell-dd63af0edecdc31f", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "If the task is solved correctly, we can see that the reconstruction error decreases quite smoothly with an increasing number of components. \n", + "\n", + "## Choosing the number of components in PCA\n", + "\n", + "An important decision that has to be made when applying PCA is the choice of the number of components $n$. The correct choice of $n$ is highly specific to the application, and there is no hard rule based on which to choose an objectively correct value. \n", + "\n", + "A simple but common approach for choosing $n$ is to choose a threshold proportion of total variance explained (e.g. 0.8 or 0.9), and select the smallest number of components such that their cumulative explained variance exceeds the threshold. By using this approach, we are effectively choosing an upper bound for the tolerated information loss." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "65da255494c0ba9470e04ca44e03c99f", + "grade": false, + "grade_id": "cell-9d7f977c8fa11c3f", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "<a id='explainedvariance'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <p><b>Student Task.</b> Proportion of variance explained. <br/> </p>\n", + " \n", + "Use the fruit image data `Z` and `PCA` to find the minimum number of components $n$ such that the cumulative proportion of explained variance of the $n$ first components exceeds $0.9$. \n", + " \n", + "You are required to store \n", + " \n", + "- an $(n_{\\rm max}, )$ array containing the **proportions of total variance explained** by each of the individual components in the variable `var_ratio`, \n", + "- an $(n_{\\rm max}, )$ array containing the **cumulative proportion of total variance explained** in the variable `cum_var_ratio`, \n", + "- the minimum number of components whose cumulative proportion of explained variance exceeds `threshold` in the variable `n_threshold`\n", + "\n", + " \n", + "**Hints**:\n", + " \n", + "- Fit a PCA model with `n_max` components.\n", + " \n", + "- The proportions of total variance explained by each component can be found in the attribute `PCA.explained_variance_ratio_`.\n", + " \n", + "- `np.cumsum()` is convenient for calculating the cumulative proportions of total variance explained.\n", + " \n", + "- `np.where(condition)` can be a useful tool for finding `n_threshold`.\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "cc92265c0422b1746e8367b74eefa342", + "grade": false, + "grade_id": "cell-84120c5379d42699", + "locked": false, + "schema_version": 3, + "solution": true, + "task": false + } + }, + "outputs": [], + "source": [ + "n_max = m # Maximum amount of components\n", + "threshold = 0.9 # Threshold for selecting the number of components\n", + "### STUDENT TASK ###\n", + "# ...\n", + "# var_ratio = ...\n", + "# cum_var_ratio = ...\n", + "# n_threshold = ...\n", + "# YOUR CODE HERE\n", + "raise NotImplementedError()\n", + " \n", + "print(f\"Number of components selected: {n_threshold}\")\n", + "print(f\"Proportion of variance explained: {cum_var_ratio[n_threshold-1]}\")\n", + "\n", + "# Plot the number of PCs vs the reconstruction error\n", + "fig, ax = plt.subplots(2, 1, figsize=(8,12))\n", + "x_bar = range(1, n_max+1)\n", + "ax[0].bar(x_bar, var_ratio)\n", + "ax[0].set_title(\"Proportion of explained variance\")\n", + "ax[0].set_xlabel(\"Component\")\n", + "ax[0].set_ylabel(\"Proportion\")\n", + "barlist = ax[1].bar(x_bar, cum_var_ratio)\n", + "barlist[n_threshold-1].set_color('b')\n", + "ax[1].plot([0,31], [0.9, 0.9], '--', color='black')\n", + "ax[1].set_title(\"Cumulative proportion of explained variance\")\n", + "ax[1].set_xlabel(\"Component\")\n", + "ax[1].set_ylabel(\"Proportion\")\n", + "plt.show() " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "b3dc894310e7ddc50f9bb8b6b1a6a254", + "grade": true, + "grade_id": "cell-e96a6ae4c245d3d7", + "locked": true, + "points": 2, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "outputs": [], + "source": [ + "# Perform some sanity checks on the outputs\n", + "assert var_ratio.shape == (30,), \"var_ratio is of the wrong shape!\"\n", + "assert cum_var_ratio.shape == (30,), \"cum_var_ratio is of the wrong shape!\"\n", + "assert n_threshold in range(10, 20), \"n_threshold is too low or too high!\"\n", + "\n", + "print(\"Sanity checks passed!\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, we will examine how increasing the number of components in the PCA transformation affects the quality of the reconstructions in practice. Will we be able to see the decrease in reconstruction error with our own eyes in the case of the fruit images?\n", + "\n", + "In the demo below, we will plot reconstructed images for PCA compressions with a different number of components. By relying on intuition and the results above, we should expect that a larger number of components results in a reconstruction that is closer to the original image." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "c66a0c741f432deb03bc7d0038231566", + "grade": false, + "grade_id": "cell-45b5e1214d2b641c", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='reconstructionerror'></a>\n", + "<div class=\" alert alert-info\">\n", + " <b>Demo.</b> Reconstructions. <br/> \n", + " \n", + "The code snippet below shows the reconstructions $\\widehat{\\mathbf{z}}^{(i)} = \\mathbf{W}_{\\rm PCA}^{T} \\mathbf{x}^{(i)} + \\overline{\\mathbf{z}}$ for the number of PCs $n=1,5,10,20,30$. \n", + "\n", + "In the code, the `PCA` class is only used to obtain the compression matrix $\\mathbf{W}_{\\rm PCA}$. The PCA transformation and reconstruction are then done by matrix operations.\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "b6bc8f6dc281670f4fd4862bdc0fc24d", + "grade": false, + "grade_id": "cell-925684dea4eb902f", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "outputs": [], + "source": [ + "from matplotlib.lines import Line2D\n", + "import matplotlib.gridspec as gridspec\n", + "import warnings\n", + "warnings.filterwarnings(\"ignore\")\n", + "\n", + "## Input:\n", + "# Z: Dataset\n", + "# n: number of dimensions\n", + "# m_pics: number of pics per class (Apple, Banana). Min 1, Max 15\n", + "def plot_reconstruct(Z, W_pca, n, m_pics=3):\n", + "\n", + " # Center Z\n", + " Z_centered = Z - np.mean(Z, axis=0)\n", + " # x=w*z\n", + " X_pca = np.matmul(W_pca[:n,:], Z_centered[:,:,None])\n", + " # x_reversed=r*x + mean(z)\n", + " Z_hat = np.matmul(W_pca[:n,:].T, X_pca)[:,:,0] + np.mean(Z, axis=0)\n", + " \n", + " # Setup figure size that scales with number of images\n", + " fig = plt.figure(figsize = (10,10))\n", + " \n", + " # Setup a (n_pics,2) grid of plots (images)\n", + " gs = gridspec.GridSpec(1, 2*m_pics)\n", + " gs.update(wspace=0.0, hspace=0.0)\n", + " for i in range(m_pics):\n", + " for j in range(0,2):\n", + " # Add a new subplot\n", + " ax = plt.subplot(gs[0, i+j*m_pics])\n", + " # Insert image data into the subplot\n", + " ax.imshow(np.reshape(Z_hat[i+(15*j)], (50,50)), cmap='gray', interpolation='nearest')\n", + " # Remove x- and y-axis from each plot\n", + " ax.set_axis_off()\n", + " \n", + " plt.subplot(gs[0,0]).set_title(\"Reconstructed images using %d PCs:\"%(n), size='large', y=1.08)\n", + " plt.show()\n", + " \n", + "pca = PCA(n_components=m) # create the object\n", + "pca.fit(Z) # compute optimal transform W_PCA\n", + "W_pca = pca.components_\n", + " \n", + "# The values of PCS n to plot for. You can change these to experiment\n", + "num_com = [1, 5, 10, 20, 30]\n", + "for n in num_com:\n", + " # If you want to print different amount of pictures, you can change the value of m_pics. (1-15)\n", + " plot_reconstruct(Z, W_pca, n, m_pics=3)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "fa82b659a4eb7c36c3347ef46745b717", + "grade": false, + "grade_id": "cell-6a62d117f2e7f864", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "Observing the reconstructed images, one can clearly see that the reconstructions improve with an increasing number of principal components. This is in line with the decreasing reconstruction error seen in the student task \"Reconstruction Error vs. Number of PC\"." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "d9cb396b60b29eefec761bda97c95c8f", + "grade": false, + "grade_id": "cell-44992b7e2e2296cb", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "## PCA for Data Visualization\n", + "<a id=\"Q3\"></a>\n", + "\n", + "An important application of PCA is data visualization. By using PCA to reduce the dimensionality of higher dimensional data to two or three dimensions, it is possible to visualize high dimensional data along those directions in which the data exhibits the largest variance. \n", + "\n", + "For example, when using PCA with $n=2$ PCs, the resulting feature vectors $\\mathbf{x}^{(1)} = \\mathbf{W}_{\\rm PCA} \\mathbf{z}_c^{(1)}, \\ldots, \\mathbf{x}^{(m)} = \\mathbf{W}_{\\rm PCA} \\mathbf{z}_c^{(m)} \\in \\mathbb{R}^{2}$ can be visualized in a two-dimensional scatterplot whose x-axis represents the first PC $x_{1}^{(i)}$ and y-axis the second PC $x_{2}^{(i)}$." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "6f881e659f25b3864276af435ce4bdd5", + "grade": false, + "grade_id": "cell-6fa5af31e9012065", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='dataprojection'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <b>Student task.</b> Scatter Plots using Two PCs. <br/> \n", + "\n", + "The following code visualizes the data as a scatter plot using either the first two PCs $x_{1}$, $x_{2}$ or using the 8th and 9th PCs $x_{8}$ and $x_{9}$. Your task is to fit the PCA model, transform the data `Z` to the lower dimensional transformation `X`, and store the 1st and 2nd, and the 8th and 9th PCs in the variables `X_PC12` and `X_PC89` respectively. Once again, remember that the indexing starts from 0 in Python!\n", + " \n", + "The rest of the code plots two scatterplots using the selected pairs of features.\n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "ddc91348f72fe1ebdfefacce8c2f529e", + "grade": false, + "grade_id": "cell-0854759cfbcb921e", + "locked": false, + "schema_version": 3, + "solution": true, + "task": false + } + }, + "outputs": [], + "source": [ + "### STUDENT TASK ###\n", + "# Fit PCA with m components\n", + "# .\n", + "# .\n", + "# .\n", + "# X_PC12 = X[:,...]\n", + "# X_PC89 = X[:,...]\n", + "# YOUR CODE HERE\n", + "raise NotImplementedError()\n", + "\n", + "plt.rc('axes', labelsize=14) # Set fontsize of the x and y labels\n", + " \n", + "fig, ax = plt.subplots(2, 1, figsize=(8,12)) # Create two subplots\n", + "\n", + "# Scatterplot of the compressed data w.r.t PCs 1 and 2\n", + "ax[0].scatter(X_PC12[:15,0], X_PC12[:15,1], c='r',marker='o', label='Apple')\n", + "ax[0].scatter(X_PC12[15:,0], X_PC12[15:,1], c='y', marker='^', label='Banana')\n", + "ax[0].set_title('using first two PCs $x_{1}$ and $x_{2}$ as features')\n", + "ax[0].legend()\n", + "ax[0].set_xlabel('$x_{1}$')\n", + "ax[0].set_ylabel('$x_{2}$')\n", + " \n", + "# Scatterplot of the compressed data w.r.t PCs 7 and 8\n", + "ax[1].scatter(X_PC89[:15,0], X_PC89[:15,1], c='r', marker='o', label='Apple')\n", + "ax[1].set_title('using 8th and 9th PC as features')\n", + "ax[1].scatter(X_PC89[15:,0], X_PC89[15:,1], c='y', marker='^', label='Banana')\n", + "ax[1].legend()\n", + "ax[1].set_xlabel('$x_{8}$')\n", + "ax[1].set_ylabel('$x_{9}$')\n", + "\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "670042342d204d202cf699b282e42bc4", + "grade": true, + "grade_id": "cell-a2687bd41e35d92d", + "locked": true, + "points": 1, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "outputs": [], + "source": [ + "# Perform some sanity checks on the results\n", + "assert X_PC12.shape == (30,2), f\"X_PC12 is of the wrong shape!\"\n", + "assert X_PC89.shape == (30,2), f\"X_PC12 is of the wrong shape!\"\n", + "\n", + "print(\"Sanity checks passed!\")\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "From the plots above, it is easy to confirm that the data exhibits the largest variance along the first principal component, with successive components explaining a decreasing amount of variance.\n", + "\n", + "By visualizing the data with respect to the first principle components, we can obtain insights of the data. For example, the first scatterplot seems to indicate that the images containing apples and bananas can be separated quite well using the first principle component. This coincides with what we inferred earlier in the notebook - that the first principle component corresponds to some kind of \"appleness\" of the image.\n", + "\n", + "It is hardly surprising that we could find this kind of separation in the case of the fruit image dataset, so it can seem that the scatterplot is not tremendously useful. However, on other datasets one might find less obvious relationships with respect to some principal components that might provide insights that are not easily obtainable otherwise!" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "f744ecfbb7e7da5f8816d496304e7390", + "grade": false, + "grade_id": "cell-b70e976f9acfca7f", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "## Using Linear regression with PCA for House Price Prediction \n", + "\n", + "<a id=\"Q2\"></a>\n", + "\n", + "Recall from round 4 that a high dimensional predictor (e.g. linear regression with many features) is prone to overfitting the training set which results in a poor predictive capability on new data. In round 4, we examined two methods of mitigating overfitting - choosing a model with only some features, and regularization. PCA compression provides a possible alternative to the former. Instead of selecting $n$ of the original features to be used in our model, we can use the $n$ first principle components as the features with respect to which we train our linear classifier.\n", + "\n", + "We now show how PCA can be used as a pre-processing step for another ML method, such as linear regression. To this end, we consider the task of predicting the price $\\mathbf{y}$ of a house based on several features $x_1,\\ldots, x_n$ of this house. In \"ML language\", the goal is to learn a good predictor $h(\\mathbf{x})$ for the price $y$ of a house. The prediction $h(\\mathbf{x})$ is based on the features $\\mathbf{x} = \\big(x_{1},\\ldots,x_{n}\\big)^{T}$ such as the average number of rooms per dwelling $x_{1}$ or the nitric oxides concentration $x_{2}$ near the house. \n", + "\n", + "### The Data\n", + "\n", + "To evaluate the quality of a predictor $h(\\mathbf{x})$, we evaluate its error (loss) on historic recordings of house sales (for which we know the price in hindsight). These recordings consist of $m$ data points. Each data point is characterized by the house features $\\mathbf{x}^{(i)} \\in \\mathbb{R}^{n}$ and the selling price $y^{(i)}$." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "08dccfd92012cc438183cb35fba1e002", + "grade": false, + "grade_id": "cell-20a3b32c473f955e", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='handsondata'></a>\n", + "<div class=\" alert alert-info\">\n", + "<p><b>Demo.</b> Loading the Data.</p>\n", + " \n", + "The following code snippet defines a function `Z,y= GetFeaturesLabels(m,D)` which reads in data of previous house sales. The input parameters are the number `m` of data points and the number `D` of features to be used for each data point. The function returns a matrix $\\mathbf{Z}$ and vector $\\mathbf{y}$. \n", + "\n", + "The features $\\mathbf{z}^{(i)}$ of the sold houses are stored in the rows of the numpy array `Z` (of shape (m,D)) and the corresponding selling prices $y^{(i)}$ in the numpy array `y` (shape (m,1)). The two arrays represent the feature matrix $\\mathbf{Z} = \\begin{pmatrix} \\mathbf{z}^{(1)} & \\ldots & \\mathbf{z}^{(m)} \\end{pmatrix}^{T}$ and the label vector $\\mathbf{y} = \\big( y^{(1)}, \\ldots, y^{(m)} \\big)^{T}$. \n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "fba769478734be45dccfd6741d5d199b", + "grade": false, + "grade_id": "cell-ac3ef96fb0f17261", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "outputs": [], + "source": [ + "# import \"Pandas\" library/package (and use shorthand \"pd\" for the package) \n", + "# Pandas provides functions for loading (storing) data from (to) files\n", + "import pandas as pd \n", + "from matplotlib import pyplot as plt \n", + "from IPython.display import display, HTML\n", + "import numpy as np \n", + "import random\n", + "\n", + "def GetFeaturesLabels(m=10, D=10):\n", + " house_dataset = load_boston()\n", + " house = pd.DataFrame(house_dataset.data, columns=house_dataset.feature_names) \n", + " x1 = house['TAX'].values.reshape(-1,1) # vector whose entries are the tax rates of the sold houses\n", + " x2 = house['AGE'].values.reshape(-1,1) # vector whose entries are the ages of the sold houses\n", + " x1 = x1[0:m]\n", + " x2 = x2[0:m]\n", + " np.random.seed(43)\n", + " Z = np.hstack((x1,x2,np.random.randn(m,D))) \n", + " \n", + " Z = Z[:,0:D] \n", + "\n", + " y = house_dataset.target.reshape(-1,1) # create a vector whose entries are the labels for each sold house\n", + " y = y[0:m]\n", + " \n", + " return Z, y" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "0512c8d4399d61c38a135509d52e5bf4", + "grade": false, + "grade_id": "cell-996a5a4dccea7328", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='pcaandlinreg'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <p><b>Student Task.</b> PCA with Linear Regression. <br/> </p>\n", + "\n", + "In the code snippet below we combine PCA with linear regression, and calculate the training and validation errors for the linear model trained on PCA transformed data for different numbers of PCs.\n", + " \n", + "First, we read in $m=500$ data points representing house sales. Each house sale is represented by a long feature vector $\\mathbf{z}^{(i)}\\in \\mathbb{R}^{D}$ of length $D=10$. \n", + " \n", + "Next, we define the feature matrix `Z_pca` that contains the features 480 first data points and which will be used to fit the PCA model. We also define the feature matrix `Z_reg` and label vector `y_reg` containing the features and labels of the rest of the data points. These will be used for training and validating a linear regression model.\n", + "\n", + "Furthermore, in order be able to estimate of the generalization capability of the linear models trained on PCA transformed data with a different number of PCs we will split the regression dataset `Z_reg`, `y_reg` into a training set `Z_train`, `y_train` and validation set `Z_val`, `y_val`.\n", + " \n", + "**Your task** is to implement the contents of the loop that calculates the training and validation errors for the linear models trained on PCA transformed data with a different number $n=1,\\ldots,D$ of components. For each $n$ you should:\n", + " \n", + "- Fit a PCA model with $n$ components\n", + " \n", + "- Transform the feature matrices `Z_train` and `Z_val` to lower dimensional versions using PCA\n", + " \n", + "- Use the PCA transformed training data to fit a linear regression model. When initializating `LinearRegression` please use `LinearRegression(fit_intercept=True)`\n", + " \n", + "- Calculate the training and validation errors (MSE) of the linear regression and store these at index `n-1` in the arrays `err_train` and `err_val` respectively. Remember to use the PCA transformed validation features to predict the labels of the validation set!\n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "171e71637253700e008bf80c6cec8fb4", + "grade": false, + "grade_id": "cell-dd44ad8b1d0fc8db", + "locked": false, + "schema_version": 3, + "solution": true + } + }, + "outputs": [], + "source": [ + "from sklearn.linear_model import LinearRegression\n", + "from sklearn.model_selection import train_test_split\n", + "from sklearn.datasets import load_boston\n", + "from sklearn.metrics import mean_squared_error\n", + "\n", + "m = 500 # total number of data points \n", + "D = 10 # length of raw feature vector \n", + "\n", + "Z, y = GetFeaturesLabels(m, D) # read in m data points from the house sales database \n", + " \n", + "# Use these features for PCA \n", + "Z_pca = Z[:480,:] # read out feature vectors of first 480 data points \n", + "\n", + "# Use these features and labels for linear regression (with transformed features)\n", + "Z_reg = Z[480:,:] # Read out feature vectors of last 20 data points \n", + "y_reg = y[480:,:] # Read out labels of last 20 data points \n", + "\n", + "# Datasets which will be preprocessed with PCA and used with linear regression\n", + "Z_train, Z_val, y_train, y_val = train_test_split(Z_reg, y_reg, test_size=0.2, random_state=42)\n", + "\n", + "err_train = np.zeros(D) # Array for storing training errors of the linear regression model\n", + "err_val = np.zeros(D) # Array for storing validation errors of the linear regression model\n", + "\n", + "for n in range(1, D+1):\n", + " # Create the PCA object and fit\n", + " # transform long feature vectors (length D) to short ones (length n)\n", + " ### STUDENT TASK ### \n", + " # -Fit a PCA model with n components\n", + " # -Compress Z_train and Z_val using PCA \n", + " # -Use the compressed features to fit a linear model\n", + " # -Calculate the training and validation errors of the \n", + " # YOUR CODE HERE\n", + " raise NotImplementedError()\n", + "\n", + "# Plot the training and validation errors\n", + "plt.figure(figsize=(8,5))\n", + "plt.plot(range(1, D+1), err_val, label=\"validation\")\n", + "plt.plot(range(1, D+1), err_train, label=\"training\")\n", + "plt.xlabel('number of PCs ($n$)')\n", + "plt.ylabel(r'error')\n", + "plt.legend()\n", + "plt.title('validation/training error vs. number of PCs')\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "65c31f55422b5dc2330f6e4b4d9ac402", + "grade": true, + "grade_id": "cell-5bc71b4230da31ed", + "locked": true, + "points": 3, + "schema_version": 3, + "solution": false + } + }, + "outputs": [], + "source": [ + "# Perform sanity checks on the outputs\n", + "assert err_train[0] >= err_train[4], \"values in err_train wrong\"\n", + "assert err_val[1] <= err_val[4], \"values in err_val wrong\"\n", + "\n", + "\n", + "print('Sanity checks passed!')" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "9b5426ab2ec358668c11ece8c032ac10", + "grade": false, + "grade_id": "cell-5c325e740c3059c3", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "If the task is correctly solved, one should see that while the training error decreases monotonically with an increasing number of components, the validation error does not. Rather, there is an optimum number of components to be used. This is similar to what was observed in round 4, where we searched for the optimum number of features to be used for predicting the grayscale values of pixels. The difference is that in the task above we use PCA for each $n$ to obtain the $n$ features that capture as much of the variance of the original data as possible (for $n$ features).\n", + "\n", + "While this technique seems very convenient based on this example, there are some caveats that are good to be aware of. The first, and perhaps most important in an ML setting, is that **even though the PCA features maximize the explained variance in the data $\\mathbf{Z}$ they do not necessarily contain the most infomation w.r.t. to the target $\\mathbf{y}$**. In fact, PCA pre-preprocessing might even worsen outcomes by losing information that is relevant w.r.t. the target variable.\n", + "\n", + "The second major caveat is that PCA pre-processing might result in features that are difficult to interpret, which reduces the ability to infer relationships between real-life features and the target. This is perhaps less important in the realm of machine learning where interpretibility is not always a priority, but it is nevertheless good to be aware of this.\n", + "\n", + "Overall, Pre-processing with PCA can be useful in many situations, but due to its caveats it is best not to use it indiscriminately. It is better to evaluate its use on a case-by-case basis, and apply it in your final model if its use results in a better predictive capability." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "cedecf02062de2c3d568bcaae3eada7b", + "grade": false, + "grade_id": "cell-1a277e08a4fc39b4", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "## The Final Quiz" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "4c044d28435b47af480979cd2b731fff", + "grade": false, + "grade_id": "cell-75f0e24b621cdbfa", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='QuestionR6_1'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <p><b>Student Task.</b> Question R6.1. </p>\n", + "\n", + " <p>Which of the following statements is true?</p>\n", + "\n", + "<ol>\n", + " <li> Dimensionality reduction helps to avoid overfitting.</li>\n", + " <li> Dimensionality reduction can only be used for labeled data points.</li>\n", + " <li> Dimensionality reduction can only be used for vectors having no more than $100$ entries.</li>\n", + " <li> Dimensionality reduction is a classification method.</li>\n", + "</ol> \n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "ae20ca729f574283b39af1abc46299bd", + "grade": false, + "grade_id": "cell-8502c4de767cf645", + "locked": false, + "schema_version": 3, + "solution": true + } + }, + "outputs": [], + "source": [ + "# answer_R6_Q1 = ...\n", + "# YOUR CODE HERE\n", + "answer_R6_Q1 = 1" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "34e3a546a60f44a88893da1a83a234ed", + "grade": true, + "grade_id": "cell-5eae8fef7c012c4e", + "locked": true, + "points": 1, + "schema_version": 3, + "solution": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Sanity check tests passed!\n" + ] + } + ], + "source": [ + "# this cell is for tests\n", + "assert answer_R6_Q1 in [1,2,3,4], '\"answer_R6_Q1\" Value should be an integer between 1 and 4.'\n", + "print('Sanity check tests passed!')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "f22f74e5bcbc9da10d1fcf7032ae59c4", + "grade": false, + "grade_id": "cell-74fbeaadae128433", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='QuestionR6_2'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <p><b>Student Task.</b> Question R6.2. </p>\n", + "\n", + " <p>Which of the following statements is true?</p>\n", + "\n", + "<ol>\n", + " <li> In general, dimensionality reduction increases the computational requirements of fitting a model on the data.</li>\n", + " <li> No information in the original data is lost when performing dimensionality reduction via PCA.</li>\n", + " <li> In general, dimensionality reduction reduces the computational requirements of fitting a model on the data.</li>\n", + " <li> PCA cannot be used for data visualization.</li>\n", + "</ol> \n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "db84b2dc8c17c572aa42b0b78e599a61", + "grade": false, + "grade_id": "cell-aa7254f504d06bd6", + "locked": false, + "schema_version": 3, + "solution": true + } + }, + "outputs": [], + "source": [ + "# answer_R6_Q2 = ...\n", + "# YOUR CODE HERE\n", + "answer_R6_Q2 = 3" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "86043b4a16ac8e0f7f67e5361f987c1a", + "grade": true, + "grade_id": "cell-d73c926ba7d72ae2", + "locked": true, + "points": 1, + "schema_version": 3, + "solution": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Sanity check tests passed!\n" + ] + } + ], + "source": [ + "# this cell is for tests\n", + "assert answer_R6_Q2 in [1,2,3,4], '\"answer_R6_Q2\" Value should be an integer between 1 and 4.'\n", + "print('Sanity check tests passed!')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "e6e1dbff9a9ea55a9171ff50cc79e5d2", + "grade": false, + "grade_id": "cell-d5d1e56fd6f7109c", + "locked": true, + "schema_version": 3, + "solution": false + } + }, + "source": [ + "<a id='QuestionR6_3'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <p><b>Student Task.</b> Question R6.3. </p>\n", + "\n", + " <p>Consider using PCA as preprocessing to transform long raw feature vectors $\\mathbf{z} \\in \\mathbb{R}^{D}$ to shorter feature vectors $\\mathbf{x} \\in \\mathbb{R}^{n}$ that are then used in linear regression. What is the effect of using a larger number $n$ of principal components in PCA?</p>\n", + "\n", + "<ol>\n", + " <li> The training error obtained from linear regression will decrease.</li>\n", + " <li> The training error obtained from linear regression will increase.</li>\n", + "</ol> \n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "8cc75b2e0c8f17ff230c0cb0d35d125a", + "grade": false, + "grade_id": "cell-5910f33ccb2a7e21", + "locked": false, + "schema_version": 3, + "solution": true + } + }, + "outputs": [], + "source": [ + "# answer_R6_Q3 = ...\n", + "# YOUR CODE HERE\n", + "answer_R6_Q3 = 1" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "7f280ab4c9f5d78b17e154b16de1d4ed", + "grade": true, + "grade_id": "cell-c839258ed02d6407", + "locked": true, + "points": 1, + "schema_version": 3, + "solution": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Sanity check tests passed!\n" + ] + } + ], + "source": [ + "# this cell is for tests\n", + "assert answer_R6_Q3 in [1,2], '\"answer_R6_Q3\" Value should be an integer between 1 and 2.'\n", + "print('Sanity check tests passed!')\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "markdown", + "checksum": "bf0ebaa2de1491ab72ea9ac21fc572bb", + "grade": false, + "grade_id": "cell-719534be8870a6d6", + "locked": true, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "source": [ + "<a id='QuestionR6_3'></a>\n", + "<div class=\" alert alert-warning\">\n", + " <p><b>Student Task.</b> Question R6.4. </p>\n", + "\n", + " <p>Consider using PCA as preprocessing to transform long raw feature vectors $\\mathbf{z} \\in \\mathbb{R}^{D}$ to shorter feature vectors $\\mathbf{x} \\in \\mathbb{R}^{n}$ that are then used in linear regression. Will this always result in better performance on new data for a linear regression model?</p>\n", + "\n", + "<ol>\n", + " <li> No.</li>\n", + " <li> Yes.</li>\n", + "</ol> \n", + "\n", + "</div>" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "deletable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "90604076301c53038cb9592871cf264f", + "grade": false, + "grade_id": "cell-815f1b53c548da39", + "locked": false, + "schema_version": 3, + "solution": true, + "task": false + } + }, + "outputs": [], + "source": [ + "# answer_R6_Q4 = ...\n", + "# YOUR CODE HERE\n", + "answer_R6_Q4 = 1" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": { + "deletable": false, + "editable": false, + "nbgrader": { + "cell_type": "code", + "checksum": "e132ae8bd281328dbc81071e63b7c20a", + "grade": true, + "grade_id": "cell-c2e7f22d5a3da9ee", + "locked": true, + "points": 1, + "schema_version": 3, + "solution": false, + "task": false + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Sanity check tests passed!\n" + ] + } + ], + "source": [ + "# this cell is for tests\n", + "assert answer_R6_Q4 in [1,2], '\"answer_R6_Q4\" Value should be an integer between 1 and 2.'\n", + "print('Sanity check tests passed!')\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.3" + }, + "varInspector": { + "cols": { + "lenName": 16, + "lenType": 16, + "lenVar": 40 + }, + "kernels_config": { + "python": { + "delete_cmd_postfix": "", + "delete_cmd_prefix": "del ", + "library": "var_list.py", + "varRefreshCmd": "print(var_dic_list())" + }, + "r": { + "delete_cmd_postfix": ") ", + "delete_cmd_prefix": "rm(", + "library": "var_list.r", + "varRefreshCmd": "cat(var_dic_list()) " + } + }, + "types_to_exclude": [ + "module", + "function", + "builtin_function_or_method", + "instance", + "_Feature" + ], + "window_display": false + } + }, + "nbformat": 4, + "nbformat_minor": 4 +}