diff --git a/README.md b/README.md
index 18c6ee2..71e552b 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,8 @@
 # Kaggle
 
 ![](static/images/logos/kaggle-logo-gray-bigger.jpeg)
 
+* [ApacheCN 开源组织](https://github.com/apachecn/organization): https://github.com/apachecn/organization
+
 > **欢迎任何人参与和完善:一个人可以走得很快,但是一群人却可以走得更远**
 
 * ApacheCN - Kaggle组队群【686932392】
 * [Kaggle](https://www.kaggle.com) 是一个流行的数据科学竞赛平台。
@@ -15,6 +17,16 @@
 ## [竞赛](https://www.kaggle.com/competitions)
 
+* 【推荐】特征工程全过程: https://www.cnblogs.com/jasonfreak/p/5448385.html
+
+> train loss 与 test loss 结果分析
+
+* train loss 不断下降,test loss 不断下降,说明网络仍在学习;
+* train loss 不断下降,test loss 趋于不变,说明网络过拟合;
+* train loss 趋于不变,test loss 不断下降,说明数据集 100% 有问题;
+* train loss 趋于不变,test loss 趋于不变,说明学习遇到瓶颈,需要减小学习率或批量数目;
+* train loss 不断上升,test loss 不断上升,说明网络结构设计不当、训练超参数设置不当、数据集未经过清洗等问题。
+
 ```
 机器学习比赛,奖金很高,业界承认分数。
 现在我们已经准备好尝试 Kaggle 竞赛了,这些竞赛分成以下几个类别。
 ```
@@ -31,6 +43,7 @@
 * [**数字识别**](/competitions/getting-started/digit-recognizer)
 * [**泰坦尼克**](/competitions/getting-started/titanic)
 * [**房价预测**](/competitions/getting-started/house-price)
+* [**nlp-情感分析**](/competitions/getting-started/word2vec-nlp-tutorial)
 
 > [第3部分:训练场 Playground](https://www.kaggle.com/competitions?sortBy=deadline&group=all&page=1&pageSize=20&segment=playground)
 
@@ -134,15 +147,3 @@
 * 企鹅: 529815144(片刻) 1042658081(那伊抹微笑) 190442212(瑶妹)
 * **ApacheCN - 学习机器学习群【629470233】**
 * **Kaggle (数据科学竞赛平台) | [ApacheCN(apache中文网)](http://www.apachecn.org/)**
-
-## [ApacheCN 组织资源](http://www.apachecn.org/)
-
-> [kaggle: 机器学习竞赛](https://github.com/apachecn/kaggle)
-
-| 深度学习 | 机器学习 | 大数据 | 运维工具 |
-| --- | --- | --- | --- |
-| [TensorFlow R1.2 中文文档](http://cwiki.apachecn.org/pages/viewpage.action?pageId=10030122) | [机器学习实战-教学](https://github.com/apachecn/MachineLearning) | [Spark 2.2.0和2.0.2 中文文档](http://spark.apachecn.org/) | [Zeppelin 0.7.2 中文文档](http://cwiki.apachecn.org/pages/viewpage.action?pageId=10030467) |
-| [Pytorch 0.3 中文文档
](http://pytorch.apachecn.org/cn/0.3.0/) | [Sklearn 0.19 中文文档](http://sklearn.apachecn.org/) | [Storm 1.1.0和1.0.1 中文文档](http://storm.apachecn.org/) | [Kibana 5.2 中文文档](http://cwiki.apachecn.org/pages/viewpage.action?pageId=8159377) |
-| | [LightGBM 中文文档](http://lightgbm.apachecn.org/cn/latest) | [Kudu 1.4.0 中文文档](http://cwiki.apachecn.org/pages/viewpage.action?pageId=10813594) | |
-| | [XGBoost 中文文档](http://xgboost.apachecn.org/cn/latest) | [Elasticsearch 5.4 中文文档](http://cwiki.apachecn.org/pages/viewpage.action?pageId=4260364) |
-| | | [Beam 中文文档](http://beam.apachecn.org/) |
diff --git a/competitions/getting-started/word2vec-nlp-tutorial/NLP电影预测.ipynb b/competitions/getting-started/word2vec-nlp-tutorial/NLP电影预测.ipynb
new file mode 100644
index 0000000..f35be15
--- /dev/null
+++ b/competitions/getting-started/word2vec-nlp-tutorial/NLP电影预测.ipynb
@@ -0,0 +1,3080 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# **word2vec nlp tutorial**\n",
+    "\n",
+    "> 比赛说明\n",
+    "\n",
+    "标记数据集由 50,000 条专门用于情感分析的 IMDB 电影评论组成。评论的情感是二元的:IMDB 评分 < 5 的评论情感标签为 0,评分 ≥ 7 的评论情感标签为 1。单部电影的评论数不超过 30 条。25,000 条带标签的训练集评论与 25,000 条测试集评论不包含相同的电影。此外,还有另外 50,000 条 IMDB 评论未提供任何评分标签。\n",
+    "\n",
+    "> 文件说明\n",
+    "\n",
+    "| 文件 | 说明 |\n",
+    "| :--- | :--- |\n",
+    "| labeledTrainData | 带标签的训练集。该文件以制表符分隔,有一个标题行,后跟 25,000 行,每行包含一条评论的 id、情感标签和文本。 |\n",
+    "| testData | 测试集。该文件以制表符分隔,有一个标题行,后跟 25,000 行,每行包含一条评论的 id 和文本。你的任务是预测每条评论的情感。 |\n",
+    "| unlabeledTrainData | 无标签的额外训练集。该文件以制表符分隔,有一个标题行,后跟 50,000 行,每行包含一条评论的 id 和文本。 |\n",
+    "| sampleSubmission | 逗号分隔的、格式正确的示例提交文件。 |\n",
+    "\n",
+    "> 数据字段\n",
+    "\n",
+    "| 字段 | 说明 |\n",
+    "| :--- | :--- |\n",
+    "| id | 每条评论的唯一 ID |\n",
+    "| sentiment | 评论的情感;1 为正面评论,0 为负面评论 |\n",
+    "| review | 评论的文本 |\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# 比赛操作流程\n",
+    "\n",
+    "分类问题:预测的是好评与差评的二分类问题\n",
+    "常用算法: K近邻(KNN)、逻辑回归(LogisticRegression)、随机森林(RandomForest)、支持向量机(SVM)、XGBoost、GBDT\n",
+    "\n",
+    "> 步骤:\n",
+    "\n",
+    "```\n",
+    "一. 数据分析\n",
+    "1. 下载并加载数据\n",
+    "2. 总体预览:了解每列数据的含义,数据的格式等\n",
+    "3. 数据初步分析,使用统计学与绘图:初步了解数据之间的相关性,为构造特征工程以及模型建立做准备\n",
+    "\n",
+    "二. 特征工程\n",
+    "1. 根据业务、常识,以及第二步的数据分析构造特征工程\n",
+    "2. 将特征转换为模型可以辨别的类型(如处理缺失值、处理文本等)\n",
+    "\n",
+    "三. 模型选择\n",
+    "1. 根据目标函数确定学习类型:是无监督学习还是监督学习,是分类问题还是回归问题等\n",
+    "2. 比较各个模型的分数,然后取效果较好的模型作为基础模型\n",
+    "\n",
+    "四. 模型融合\n",
+    "\n",
+    "五. 修改特征和模型参数\n",
+    "1. 可以通过添加或者修改特征,提高模型的上限\n",
+    "2. 通过修改模型的参数,使模型逼近上限\n",
+    "```\n",
+    "\n",
+    "* * * \n",
+    "\n",
+    "* 比赛地址: https://www.kaggle.com/c/word2vec-nlp-tutorial\n",
+    "* 参考地址: https://www.cnblogs.com/zhao441354231/p/6056914.html\n",
+    "* 参考地址: https://blog.csdn.net/lijingpengchina/article/details/52250765\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 一.数据分析\n",
+    "\n",
+    "### 数据下载和加载\n",
+    "\n",
+    "* 数据集下载地址: https://www.kaggle.com/c/word2vec-nlp-tutorial/data\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# 导入相关数据包\n",
+    "import pandas as pd\n",
+    "import numpy as np\n",
+    "from bs4 import BeautifulSoup"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 读取数据"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "data_dir = \"/opt/data/kaggle/getting-started/word2vec-nlp-tutorial\"\n",
+    "# 载入数据集 \n",
+    "train = pd.read_csv(os.path.join(data_dir, 'labeledTrainData.tsv'), header=0, delimiter=\"\\t\", quoting=3)\n",
+    "pre = pd.read_csv(os.path.join(data_dir, 'testData.tsv'), header=0, delimiter=\"\\t\", quoting=3)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "(25000, 3) \t (25000, 2) \t\n",
+      "<class 'pandas.core.frame.DataFrame'>\n",
+      "RangeIndex: 25000 entries, 0 to 24999\n",
+      "Data columns (total 3 columns):\n",
+      "id           25000 non-null object\n",
+      "sentiment    25000 non-null int64\n",
+      "review       25000 non-null object\n",
+      "dtypes: 
int64(1), object(2)\n", + "memory usage: 586.0+ KB\n", + "None \n", + "\n", + "\n", + " ['id' 'sentiment' 'review']\n", + "\n", + " id sentiment review\n", + "0 \"5814_8\" 1 \"With all this stuff going down at the moment ...\n", + "1 \"2381_9\" 1 \"\\\"The Classic War of the Worlds\\\" by Timothy ...\n", + "2 \"7759_3\" 0 \"The film starts with a manager (Nicholas Bell...\n" + ] + } + ], + "source": [ + "print(train.shape, '\\t', pre.shape, '\\t')\n", + "print(train.info(), '\\n')\n", + "print('\\n', train.columns.values)\n", + "print('\\n', train.head(3))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 数据预处理\n", + "\n", + "* 1.去掉html标签\n", + "* 2.移除标点\n", + "* 3.切分成词/token\n", + "* 4.去掉停用词\n", + "* 5.重组为新的句子" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "# def review_to_wordlist(review):\n", + "# '''\n", + "# 把IMDB的评论转成词序列\n", + "# 参考:http://blog.csdn.net/longxinchen_ml/article/details/50629613\n", + "# '''\n", + "# # 去掉HTML标签,拿到内容\n", + "# review_text = BeautifulSoup(review, \"html.parser\").get_text()\n", + "# # 用正则表达式取出符合规范的部分\n", + "# review_text = re.sub(\"[^a-zA-Z]\", \" \", review_text)\n", + "# # 小写化所有的词,并转成词list\n", + "# words = review_text.lower().split()\n", + "# # 返回words\n", + "# return words\n", + "\n", + "\n", + "# 预处理数据\n", + "label = train['sentiment']\n", + "train_data = []\n", + "pre_data = []\n", + "for i in range(len(train['review'])):\n", + " train_data.append(BeautifulSoup(train['review'][i], \"html.parser\").get_text())\n", + "test_data = []\n", + "for i in range(len(pre['review'])):\n", + " pre_data.append(BeautifulSoup(pre['review'][i], \"html.parser\").get_text())" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\"With all this stuff going down at the moment with MJ i've started listening to his music, watching 
the odd documentary here and there, watched The Wiz and watched Moonwalker again. Maybe i just want to get a certain insight into this guy who i thought was really cool in the eighties just to maybe make up my mind whether he is guilty or innocent. Moonwalker is part biography, part feature film which i remember going to see at the cinema when it was originally released. Some of it has subtle messages about MJ's feeling towards the press and also the obvious message of drugs are bad m'kay.Visually impressive but of course this is all about Michael Jackson so unless you remotely like MJ in anyway then you are going to hate this and find it boring. Some may call MJ an egotist for consenting to the making of this movie BUT MJ and most of his fans would say that he made it for the fans which if true is really nice of him.The actual feature film bit when it finally starts is only on for 20 minutes or so excluding the Smooth Criminal sequence and Joe Pesci is convincing as a psychopathic all powerful drug lord. Why he wants MJ dead so bad is beyond me. Because MJ overheard his plans? Nah, Joe Pesci's character ranted that he wanted people to know it is he who is supplying drugs etc so i dunno, maybe he just hates MJ's music.Lots of cool things in this like MJ turning into a car and a robot and the whole Speed Demon sequence. Also, the director must have had the patience of a saint when it came to filming the kiddy Bad sequence as usually directors hate working with one kid let alone a whole bunch of them performing a complex dance scene.Bottom line, this movie is for people who like MJ on one level or another (which i think is most people). If not, then stay away. It does try and give off a wholesome message and ironically MJ's bestest buddy in this movie is a girl! Michael Jackson is truly one of the most talented people ever to grace this planet but is he guilty? 
Well, with all the attention i've gave this subject....hmmm well i don't know because people can be different behind closed doors, i know this for a fact. He is either an extremely nice but stupid guy or one of the most sickest liars. I hope he is not the latter.\" \n", + "\n", + "\"Naturally in a film who's main themes are of mortality, nostalgia, and loss of innocence it is perhaps not surprising that it is rated more highly by older viewers than younger ones. However there is a craftsmanship and completeness to the film which anyone can enjoy. The pace is steady and constant, the characters full and engaging, the relationships and interactions natural showing that you do not need floods of tears to show emotion, screams to show fear, shouting to show dispute or violence to show anger. Naturally Joyce's short story lends the film a ready made structure as perfect as a polished diamond, but the small changes Huston makes such as the inclusion of the poem fit in neatly. It is truly a masterpiece of tact, subtlety and overwhelming beauty.\"\n" + ] + } + ], + "source": [ + "# 预览数据\n", + "print(train_data[0], '\\n')\n", + "print(pre_data[0])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 特征处理" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "# 合并训练和测试集以便进行TFIDF向量化操作\n", + "data_all = train_data + pre_data\n", + "len_train = len(train_data)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "直接丢给计算机这些词文本,计算机是无法计算的,因此我们需要把文本转换为向量,有几种常见的文本向量处理方法,比如: \n", + "\n", + "1. Bow-of-Words计数 \n", + "2. TF-IDF向量 \n", + "3. 
Word2vec向量 " + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['have', 'yourselves', 'itself', \"haven't\", 'y', 'shan', 'because', 'didn', 'it', \"she's\", 'nor', 'once', 'hadn', 'an', 'will', 'in', 'than', 'just', \"doesn't\", 'down', \"mightn't\", 've', 'shouldn', 'before', 'when', 'and', 'won', 'which', \"wouldn't\", 'other', 'are', 'doesn', 'here', 'him', 'why', \"mustn't\", 'theirs', 'ours', 'himself', 'now', 'at', 'but', 'its', 'were', 'whom', 'how', 'again', 'under', 'myself', 'me', 'your', 'then', 'he', 'the', 'who', 'herself', 'off', 'aren', 'each', 'same', 'all', \"that'll\", 'so', 'having', 'that', 'couldn', 'she', 'wasn', 'own', \"shouldn't\", 'by', 'there', 'this', 'we', 'if', 'no', 'doing', 'don', 'ain', \"you've\", 'had', 't', 'into', 'too', 'hasn', 'they', 'few', 'their', 'being', 'mightn', \"you'd\", 'a', 'her', \"couldn't\", 'did', \"you'll\", 'd', 'can', 'been', 'm', 'yours', 'very', 'wouldn', 'i', 'his', 'during', 'through', 'you', 'against', 'be', 'themselves', 'not', 'out', \"don't\", 'is', \"it's\", 'was', 'does', 'ma', 'needn', 'these', 'some', 'on', \"isn't\", 'for', 'further', \"hadn't\", 'isn', 'below', 'more', \"didn't\", 'has', 'up', 'with', 'about', \"weren't\", 'am', 'those', 'where', 'what', 'any', 's', \"you're\", 'do', 'or', 'over', 'weren', 'my', 'until', 'as', 'most', 'only', \"should've\", 'ourselves', \"needn't\", 'haven', 'above', 'such', 'hers', \"shan't\", 'after', 'while', \"wasn't\", 'them', 'between', 'our', 'from', 'yourself', \"aren't\", 'should', 'mustn', \"hasn't\", \"won't\", 'to', 're', 'of', 'both', 'o', 'll']\n" + ] + } + ], + "source": [ + "from nltk.corpus import stopwords\n", + "#英文停止词,set()集合函数消除重复项\n", + "list_stopWords = list(set(stopwords.words('english')))\n", + "print(list_stopWords)" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "metadata": {}, + "outputs": [], + "source": [ + 
"from gensim import corpora\n",
+    "\n",
+    "# BoW 词袋模型 \n",
+    "import re\n",
+    "# 注意:re.sub 返回的是字符串,必须先 split() 成词列表,否则会逐字符迭代\n",
+    "texts = [[word for word in re.sub(\"[^a-zA-Z]\", \" \", doc.lower()).split() if word not in list_stopWords] for doc in data_all]\n",
+    "dictionary = corpora.Dictionary(texts)\n",
+    "# 对每一行的单词,统计出现次数\n",
+    "corpus = [dictionary.doc2bow(text) for text in texts]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "0 \n",
+      "1 b\n",
+      "2 c\n",
+      "3 e\n",
+      "4 f\n",
+      "[(0, 471), (1, 29), (2, 56), (3, 206), (4, 37), (5, 42), (6, 102), (7, 19), (8, 18), (9, 82), (10, 115), (11, 31), (12, 3), (13, 72), (14, 53), (15, 15), (16, 41), (17, 3), (18, 1)]\n"
+     ]
+    }
+   ],
+   "source": [
+    "for key in dictionary.keys()[0:5]:\n",
+    "    print(key, dictionary[key])\n",
+    "\n",
+    "print(corpus[0])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 2.TF-IDF + LSI 主题模型"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from gensim import models\n",
+    "tfidf_model = models.TfidfModel(corpus=corpus, id2word=dictionary, normalize=True) \n",
+    "# 将整个corpus转为tf-idf格式\n",
+    "corpus_tfidf = tfidf_model[corpus]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "## lsi 主题模型,作为特征向量\n",
+    "lsi_model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=200)\n",
+    "corpus_lsi = lsi_model[corpus_tfidf]\n",
+    "\n",
+    "# 提取数字,转化为numpy的矩阵\n",
+    "all_x = [[v for k, v in doc] for doc in corpus_lsi]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 54,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[(0, '0.162*\"movie\" + 0.140*\"film\" + 0.102*\"-\" + 0.099*\"good\" + 0.099*\"like\" + 0.098*\"really\" + 0.098*\"bad\" + 0.092*\"one\" + 0.089*\"would\" + 0.088*\"story\"'), 
(1, '0.276*\"bad\" + 0.238*\"movie\" + 0.180*\"worst\" + -0.156*\"-\" + 0.153*\"movies\" + 0.113*\"waste\" + 0.108*\"ever\" + 0.106*\"acting\" + 0.106*\"terrible\" + -0.101*\"film\"'), (2, '-0.667*\"show\" + -0.212*\"episode\" + -0.203*\"series\" + 0.160*\"-\" + 0.153*\"film\" + -0.146*\"season\" + -0.145*\"episodes\" + -0.135*\"tv\" + -0.130*\"shows\" + -0.125*\"funny\"')]\n", + "(50000, 200)\n" + ] + } + ], + "source": [ + "# print(np.shape(corpus_lsi))\n", + "# (50000, 200, 2)\n", + "print(lsi_model.print_topics(3))\n", + "# print(corpus_lsi[0])\n", + "\n", + "import numpy as np\n", + "print(np.shape(all_x))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 3.Word2vec向量\n", + "\n", + "神经网络语言模型 L = SUM[log(p(w|contect(w))],即在w的上下文下计算当前词w的概率,由公式可以看到,我们的核心是计算p(w|contect(w), Word2vec给出了构造这个概率的一个方法。" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": {}, + "outputs": [], + "source": [ + "import re\n", + "from bs4 import BeautifulSoup\n", + "from nltk.corpus import stopwords\n", + "\n", + "# def show_diff(origin, html, text):\n", + "# print(origin)\n", + "# print(\"\\n-----------show diff-----------\\n\")\n", + "# print(html)\n", + "# print(\"\\n-----------show diff-----------\\n\")\n", + "# print(text)\n", + "\n", + "# origin = train['review'][0]\n", + "# html = BeautifulSoup(origin, \"html.parser\").get_text()\n", + "# text = re.sub('[^a-zA-Z]', ' ', html).strip()\n", + "# show_diff(origin, html, text)\n", + "\n", + "stopwords = set(stopwords.words(\"english\"))\n", + "\n", + "def review_to_sentence(review):\n", + " html = BeautifulSoup(review, \"html.parser\").get_text()\n", + " text = re.sub('[^a-zA-Z]', ' ', html).strip()\n", + " words = [word for word in text.lower().split() if word not in stopwords]\n", + " return words\n", + "\n", + "unlabeled_train = pd.read_csv(os.path.join(data_dir, 'unlabeledTrainData.tsv'), header=0, delimiter=\"\\t\", quoting=3 )\n", + "train_texts = pd.concat([train['review'], 
unlabeled_train['review']])\n", + "sentences = list(map(review_to_sentence, train_texts))" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "(75000,)\n" + ] + } + ], + "source": [ + "print(np.shape(sentences))" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [], + "source": [ + "import time\n", + "from gensim.models import Word2Vec\n", + "# 模型参数\n", + "num_features = 784 # Word vector dimensionality(原来默认用300维,为了计算CNN, 设置 784维 = 28*28) \n", + "min_word_count = 10 # Minimum word count \n", + "num_workers = 4 # Number of threads to run in parallel\n", + "context = 10 # Context window size \n", + "downsampling = 1e-3 # Downsample setting for frequent words" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "训练模型中...\n", + "训练完成\n", + "CPU times: user 7min 17s, sys: 6.85 s, total: 7min 24s\n", + "Wall time: 3min 1s\n" + ] + } + ], + "source": [ + "%%time\n", + "# 训练模型\n", + "print(\"训练模型中...\")\n", + "# model = Word2Vec(sentences, workers=num_workers, \\\n", + "# size=num_features, min_count=min_word_count, \\\n", + "# window=5, sample=downsampling)\n", + "model = Word2Vec(sentences, size=num_features, window=5)\n", + "print(\"训练完成\")" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/Users/jiangzl/.virtualenvs/python3.6/lib/python3.6/site-packages/ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `__getitem__` (Method will be removed in 4.0.0, use self.wv.__getitem__() instead).\n", + " \"\"\"Entry point for launching an IPython kernel.\n" + ] + }, + { + "data": { + "text/plain": [ + "(784,)" + ] + }, + "execution_count": 33, + "metadata": {}, + "output_type": "execute_result" + } + ], + 
"source": [ + "np.shape(model[\"film\"])" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'kitchen'" + ] + }, + "execution_count": 34, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.wv.doesnt_match(\"man woman child kitchen\".split())" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'berlin'" + ] + }, + "execution_count": 35, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.wv.doesnt_match(\"france england germany berlin\".split())" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'london'" + ] + }, + "execution_count": 36, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.wv.doesnt_match(\"paris berlin london austria\".split())" + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[('men', 0.5552874803543091),\n", + " ('lady', 0.5526503920555115),\n", + " ('woman', 0.49917668104171753),\n", + " ('mans', 0.47213518619537354),\n", + " ('guy', 0.4668915569782257)]" + ] + }, + "execution_count": 37, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.wv.most_similar(\"man\", topn=5)" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[('princess', 0.6967809200286865),\n", + " ('bride', 0.6197341084480286),\n", + " ('latifah', 0.6163896918296814),\n", + " ('goddess', 0.6069524884223938),\n", + " ('showgirl', 0.5752988457679749)]" + ] + }, + "execution_count": 38, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.wv.most_similar(\"queen\", topn=5)" + ] + }, + { + "cell_type": "code", + 
"execution_count": 39, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[('terrible', 0.8102608919143677),\n", + " ('horrible', 0.7840115427970886),\n", + " ('dreadful', 0.7728089690208435),\n", + " ('horrid', 0.7526298761367798),\n", + " ('atrocious', 0.7394574284553528)]" + ] + }, + "execution_count": 39, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.wv.most_similar(\"awful\", topn=5)" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[('princess', 0.4752192795276642)]" + ] + }, + "execution_count": 40, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "model.wv.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": {}, + "outputs": [], + "source": [ + "def makeFeatureVec(words, model, num_features):\n", + " '''\n", + " 对段落中的所有词向量进行取平均操作\n", + " '''\n", + " featureVec = np.zeros((num_features,), dtype=\"float32\")\n", + " nwords = 0.\n", + "\n", + " # Index2word包含了词表中的所有词,为了检索速度,保存到set中\n", + " index2word_set = set(model.wv.index2word)\n", + " for word in words:\n", + " if word in index2word_set:\n", + " nwords = nwords + 1.\n", + " featureVec = np.add(featureVec, model[word])\n", + "\n", + " # 取平均\n", + " featureVec = np.divide(featureVec, nwords)\n", + " return featureVec\n", + "\n", + "\n", + "def getAvgFeatureVecs(reviews, model, num_features):\n", + " '''\n", + " 给定一个文本列表,每个文本由一个词列表组成,返回每个文本的词向量加和的平均值\n", + " '''\n", + " counter = 0\n", + " reviewFeatureVecs = np.zeros((len(reviews), num_features), dtype=\"float32\")\n", + "\n", + " for review in reviews:\n", + " if counter % 5000 == 0:\n", + " print(\"Review %d of %d\" % (counter, len(reviews)))\n", + "\n", + " reviewFeatureVecs[counter] = makeFeatureVec(review, model, num_features)\n", + " counter = counter + 1\n", + "\n", + " return reviewFeatureVecs" + ] 
+ }, + { + "cell_type": "code", + "execution_count": 42, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "CPU times: user 6 µs, sys: 1e+03 ns, total: 7 µs\n", + "Wall time: 11.2 µs\n", + "Review 0 of 25000\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/Users/jiangzl/.virtualenvs/python3.6/lib/python3.6/site-packages/ipykernel_launcher.py:13: DeprecationWarning: Call to deprecated `__getitem__` (Method will be removed in 4.0.0, use self.wv.__getitem__() instead).\n", + " del sys.path[0]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Review 5000 of 25000\n", + "Review 10000 of 25000\n", + "Review 15000 of 25000\n", + "Review 20000 of 25000\n", + "(25000, 784)\n" + ] + } + ], + "source": [ + "%time \n", + "trainDataVecs = getAvgFeatureVecs(texts[:len_train], model, num_features)\n", + "print(np.shape(trainDataVecs))" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "CPU times: user 6 µs, sys: 2 µs, total: 8 µs\n", + "Wall time: 97 µs\n", + "Review 0 of 25000\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/Users/jiangzl/.virtualenvs/python3.6/lib/python3.6/site-packages/ipykernel_launcher.py:13: DeprecationWarning: Call to deprecated `__getitem__` (Method will be removed in 4.0.0, use self.wv.__getitem__() instead).\n", + " del sys.path[0]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Review 5000 of 25000\n", + "Review 10000 of 25000\n", + "Review 15000 of 25000\n", + "Review 20000 of 25000\n", + "(25000, 784)\n" + ] + } + ], + "source": [ + "%time \n", + "testDataVecs = getAvgFeatureVecs(texts[len_train:], model, num_features)\n", + "print(np.shape(testDataVecs))" + ] + }, + { + "cell_type": "code", + "execution_count": 92, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + 
"output_type": "stream", + "text": [ + "\n", + "高斯贝叶斯分类器 10折交叉验证得分: \n", + " [0.62715936 0.6181632 0.62577952 0.62458144 0.63289088 0.59956992\n", + " 0.61033216 0.62668192 0.610296 0.60734944]\n", + "\n", + "高斯贝叶斯分类器 10折交叉验证平均得分: \n", + " 0.618280384\n" + ] + } + ], + "source": [ + "from sklearn.naive_bayes import GaussianNB as GNB\n", + "from sklearn.cross_validation import cross_val_score\n", + "\n", + "gnb_model = GNB()\n", + "gnb_model.fit(trainDataVecs, label)\n", + "\n", + "scores = cross_val_score(gnb_model, trainDataVecs, label, cv=10, scoring='roc_auc')\n", + "print(\"\\n高斯贝叶斯分类器 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\n高斯贝叶斯分类器 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 机器学习 - 模型调参" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### KNN 模型训练" + ] + }, + { + "cell_type": "code", + "execution_count": 72, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "knn算法 10折交叉验证得分: \n", + " [0.82005056 0.81503776 0.83006976 0.8199152 0.82069568 0.827304\n", + " 0.81693088 0.8250944 0.80150176 0.821496 ]\n", + "\n", + "knn算法 10折交叉验证平均得分: \n", + " 0.8198095999999999\n" + ] + } + ], + "source": [ + "from sklearn.neighbors import KNeighborsClassifier\n", + "from sklearn.model_selection import cross_val_score\n", + "\n", + "knn_model = KNeighborsClassifier(n_neighbors=5)\n", + "knn_model.fit(all_x[:len_train], label)\n", + "\n", + "scores = cross_val_score(knn_model, all_x[:len_train], label, cv=10, scoring='roc_auc')\n", + "print(\"\\nknn算法 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\nknn算法 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 决策树 模型训练" + ] + }, + { + "cell_type": "code", + "execution_count": 73, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "决策树 10折交叉验证得分: \n", + " [0.7392 
0.7232 0.7292 0.7236 0.7412 0.7164 0.718 0.724 0.7124 0.7156]\n", + "\n", + "决策树 10折交叉验证平均得分: \n", + " 0.72428\n" + ] + } + ], + "source": [ + "from sklearn.tree import DecisionTreeClassifier\n", + "from sklearn.model_selection import cross_val_score\n", + "\n", + "tree_model = DecisionTreeClassifier()\n", + "tree_model.fit(all_x[:len_train], label)\n", + "\n", + "scores = cross_val_score(tree_model, all_x[:len_train], label, cv=10, scoring='roc_auc')\n", + "print(\"\\n决策树 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\n决策树 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 逻辑回归 模型训练" + ] + }, + { + "cell_type": "code", + "execution_count": 99, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "逻辑回归 10折交叉验证得分: \n", + " [0.94440064 0.94031744 0.95128192 0.94374784 0.9410656 0.94308864\n", + " 0.94733184 0.948768 0.93660352 0.94612288]\n", + "\n", + "逻辑回归 10折交叉验证平均得分: \n", + " 0.944272832\n" + ] + } + ], + "source": [ + "from sklearn.linear_model import LogisticRegression\n", + "from sklearn.model_selection import cross_val_score\n", + "\n", + "lr_model = LogisticRegression(C=0.1, max_iter=100)\n", + "lr_model.fit(all_x[:len_train], label)\n", + "\n", + "scores = cross_val_score(lr_model, all_x[:len_train], label, cv=10, scoring='roc_auc')\n", + "print(\"\\n逻辑回归 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\n逻辑回归 10折交叉验证平均得分: \\n\", np.mean(scores))\n", + "\n", + "\n", + "# from sklearn.model_selection import GridSearchCV\n", + "\n", + "# # 设定grid search的参数\n", + "# grid_values = {'C': [1, 15, 30, 50]} \n", + "# \"\"\"\n", + "# penalty: l1 or l2, 用于指定惩罚中使用的标准。\n", + "# \"\"\"\n", + "# model_LR = GridSearchCV(estimator=LR(penalty='l2', dual=True, random_state=0), grid_values, scoring='roc_auc', cv=20)\n", + "# model_LR.fit(train_x, label)\n", + "\n", + "# 输出结果\n", + "# print(model_LR.cv_results_, '\\n', model_LR.best_params_, model_LR.best_score_)" + ] + 
}, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### SVM 模型训练" + ] + }, + { + "cell_type": "code", + "execution_count": 75, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "SVM 10折交叉验证得分: \n", + " [0.94539328 0.9421344 0.95242176 0.94563328 0.94189504 0.94408704\n", + " 0.94839424 0.94898688 0.93809024 0.9473792 ]\n", + "\n", + "SVM 10折交叉验证平均得分: \n", + " 0.945441536\n" + ] + } + ], + "source": [ + "from sklearn.svm import SVC\n", + "from sklearn.model_selection import cross_val_score\n", + "\n", + "# model = SVC(C=4, kernel='rbf')\n", + "svm_model = SVC(kernel='linear', probability=True)\n", + "svm_model.fit(all_x[:len_train], label)\n", + "\n", + "scores = cross_val_score(svm_model, all_x[:len_train], label, cv=10, scoring='roc_auc')\n", + "print(\"\\nSVM 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\nSVM 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "code", + "execution_count": 152, + "metadata": {}, + "outputs": [], + "source": [ + "svm_model = SVC(kernel='linear', probability=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### XGBoost 模型训练" + ] + }, + { + "cell_type": "code", + "execution_count": 107, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "SVM 10折交叉验证得分: \n", + " [0.93396416 0.93105024 0.93927616 0.9332768 0.9338336 0.93682432\n", + " 0.93343296 0.93623296 0.92564352 0.93381504]\n", + "\n", + "SVM 10折交叉验证平均得分: \n", + " 0.933734976\n" + ] + } + ], + "source": [ + "from sklearn.model_selection import cross_val_score\n", + "from xgboost import XGBClassifier\n", + "import numpy as np\n", + "\n", + "xgb_model = XGBClassifier(n_estimators=150, min_samples_leaf=3, max_depth=6)\n", + "\"\"\"\n", + "AttributeError: 'list' object has no attribute 'shape'\n", + "list => np.array\n", + "\"\"\"\n", + "xgb_model.fit(np.array(all_x[:len_train]), label)\n", + "\n", + "scores = 
cross_val_score(xgb_model, np.array(all_x[:len_train]), label, cv=10, scoring='roc_auc')\n", + "print(\"\\nXGB 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\nXGB 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 模型融合" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### bagging: 随机森林 \n", + "\n", + "随机森林效果不好,去掉所有的树模型" + ] + }, + { + "cell_type": "code", + "execution_count": 108, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "随机森林 10折交叉验证得分: \n", + " [0.94539328 0.9421344 0.95242176 0.94563328 0.94189504 0.94408704\n", + " 0.94839424 0.94898688 0.93809024 0.9473792 ]\n", + "\n", + "随机森林 10折交叉验证平均得分: \n", + " 0.945441536\n" + ] + } + ], + "source": [ + "from sklearn.model_selection import GridSearchCV\n", + "from sklearn.ensemble import RandomForestClassifier\n", + "\n", + "# parameters= {'n_estimators': range(10, 101, 10)} \n", + "# gsearch_rf = GridSearchCV(\n", + "# estimator=RandomForestClassifier(max_depth=8, random_state=0),\n", + "# param_grid=parameters, scoring='roc_auc', cv=10)\n", + "\n", + "# gsearch_rf = gsearch_rf.fit(all_x[:len_train], label)\n", + "\n", + "# print(gsearch_rf.cv_results_, '\\n', gsearch_rf.best_params_, '\\t', gsearch_rf.best_score_)\n", + "\"\"\"\n", + "[mean: 0.87486, std: 0.00576, params: {'n_estimators': 10}, mean: 0.88505, std: 0.00611, params: {'n_estimators': 20}, mean: 0.89032, std: 0.00609, params: {'n_estimators': 30}, mean: 0.89246, std: 0.00537, params: {'n_estimators': 40}, mean: 0.89439, std: 0.00528, params: {'n_estimators': 50}, \n", + " mean: 0.89507, std: 0.00607, params: {'n_estimators': 60}, mean: 0.89591, std: 0.00618, params: {'n_estimators': 70}, mean: 0.89634, std: 0.00634, params: {'n_estimators': 80}, mean: 0.89671, std: 0.00607, params: {'n_estimators': 90}, mean: 0.89753, std: 0.00588, params: {'n_estimators': 100}] \n", + "\n", + "{'n_estimators': 100} 
0.89753344\n", + "\"\"\"\n", + "\n", + "rf_model = RandomForestClassifier(n_estimators=100, max_depth=8, random_state=0)\n", + "rf_model.fit(all_x[:len_train], label)\n", + "\n", + "scores = cross_val_score(rf_model, all_x[:len_train], label, cv=10, scoring='roc_auc')\n", + "print(\"\\n随机森林 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\n随机森林 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### boosting: AdaBoost" + ] + }, + { + "cell_type": "code", + "execution_count": 89, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "AdaBoost 10折交叉验证得分: \n", + " [0.9207616 0.91974976 0.92787136 0.91906176 0.92305856 0.92228416\n", + " 0.9224832 0.92169856 0.91816256 0.92096768]\n", + "\n", + "AdaBoost 10折交叉验证平均得分: \n", + " 0.9216099200000001\n" + ] + } + ], + "source": [ + "from sklearn.ensemble import AdaBoostClassifier\n", + "from sklearn.tree import DecisionTreeClassifier\n", + "from sklearn.model_selection import cross_val_score\n", + "\n", + "ab_model = AdaBoostClassifier(\n", + " DecisionTreeClassifier(max_depth=2),\n", + " n_estimators=600,\n", + " learning_rate=1)\n", + "\n", + "ab_model.fit(all_x[:len_train], label)\n", + "\n", + "scores = cross_val_score(ab_model, all_x[:len_train], label, cv=10, scoring='roc_auc')\n", + "print(\"\\nAdaBoost 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\nAdaBoost 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### voting: 多模型投票" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": false + }, + "outputs": [], + "source": [ + "from sklearn.model_selection import cross_val_score\n", + "from sklearn.ensemble import VotingClassifier\n", + "\n", + "\"\"\"\n", + "soft报错是因为这种投票方式使用的是每个分类器的概率输出值进行投票的。\n", + "所以要求每个分类器的输出是概率值,而不是一个类别。\n", + "对于svc来说,默认的输出是类别,所以会有问题,其他分类器不会有这样的问题。\n", + "\"\"\"\n", + "\n", + "vot_model = 
VotingClassifier(\n", + "# estimators=[('lr', lr_model), ('svm', svm_model), ('xgb', xgb_model), ('rf', rf_model), ('ab', ab_model)]\n", + " estimators=[('xgb', xgb_model), ('rf', rf_model)],\n", + " voting='hard')\n", + "vot_model.fit(np.array(all_x[:len_train]), np.array(label))\n", + "\n", + "# voting='hard' 不输出概率, 无法计算 roc_auc, 这里改用 accuracy 评估\n", + "scores = cross_val_score(vot_model, np.array(all_x[:len_train]), np.array(label), cv=10, scoring='accuracy')\n", + "print(\"\\nVoting 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\nVoting 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### stacking: 多模型堆叠" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "y = np.array([0, 0, 1, 1])\n", + "skf = StratifiedKFold(y, 2)\n", + "len(skf)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "'''模型融合中使用到的各个单模型'''\n", + "import numpy as np\n", + "from sklearn.model_selection import cross_val_score\n", + "from sklearn.cross_validation import StratifiedKFold\n", + "\n", + "# 划分train数据集,调用代码,把数据集名字转成和代码一样\n", + "X = np.array(all_x[:len_train])\n", + "X_predict = np.array(all_x[len_train:])\n", + "label = label.astype(np.integer)\n", + "y = label.values\n", + "\n", + "# clfs = [LogisticRegression(C=0.1, max_iter=100),\n", + "# xgb.XGBClassifier(max_depth=6, n_estimators=100, num_round = 5),\n", + "# RandomForestClassifier(n_estimators=100, max_depth=6, oob_score=True),\n", + "# GradientBoostingClassifier(learning_rate=0.3, max_depth=6, n_estimators=100)]\n", + "\n", + "clfs = [knn_model, tree_model, lr_model, svm_model, xgb_model, rf_model, ab_model]\n", + "\n", + "\n", + "# 创建n_folds\n", + "n_folds = 10\n", + "skf = StratifiedKFold(y, n_folds)\n", + "\n", + "\n", + "# 创建零矩阵\n", + "dataset_blend_train = np.zeros((X.shape[0], len(clfs)))\n", + "dataset_blend_test = np.zeros((X_predict.shape[0], len(clfs)))\n", + "\n", + "# 建立模型\n", + "for j, clf in 
enumerate(clfs):\n", + " '''依次训练各个单模型'''\n", + " # print(j, clf)\n", + " dataset_blend_test_j = np.zeros((X_predict.shape[0], len(skf)))\n", + " for i, (train, test) in enumerate(skf):\n", + " '''使用第i个部分作为预测,剩余的部分来训练模型,获得其预测的输出作为第i部分的新特征。'''\n", + " # print(\"Fold\", i)\n", + " X_train, y_train, X_test, y_test = X[train], y[train], X[test], y[test]\n", + " clf.fit(X_train, y_train)\n", + " y_submission = clf.predict_proba(X_test)[:, 1]\n", + " dataset_blend_train[test, j] = y_submission\n", + " dataset_blend_test_j[:, i] = clf.predict_proba(X_predict)[:, 1]\n", + " '''对于测试集,直接用这k个模型的预测值均值作为新的特征。'''\n", + " dataset_blend_test[:, j] = dataset_blend_test_j.mean(1)\n", + "\n", + "# 建立第二层模型 (注意: 用全部训练标签 y 拟合, 而不是最后一折的 y_train)\n", + "stacking_model = LogisticRegression(C=0.1, max_iter=100)\n", + "stacking_model.fit(dataset_blend_train, y)\n", + "\n", + "scores = cross_val_score(stacking_model, dataset_blend_train, label, cv=10, scoring='roc_auc')\n", + "print(\"\\nStacking 10折交叉验证得分: \\n\", scores)\n", + "print(\"\\nStacking 10折交叉验证平均得分: \\n\", np.mean(scores))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 数据导出" + ] + }, + { + "cell_type": "code", + "execution_count": 70, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "保存结果...\n", + " id sentiment\n", + "0 \"12311_10\" 1\n", + "1 \"8348_2\" 0\n", + "2 \"5828_4\" 1\n", + "3 \"7186_2\" 1\n", + "4 \"12128_7\" 1\n", + "5 \"2913_8\" 1\n", + "6 \"4396_1\" 0\n", + "7 \"395_2\" 0\n", + "8 \"10616_1\" 0\n", + "9 \"9074_9\" 1\n", + "结束.\n" + ] + }, + { + "data": { + "text/plain": [ + "'\\n1.提交最终的结果到kaggle,AUC为:0.85728,排名300左右,50%的水平\\n2. 
ngram_range = 3, 三元文法,AUC为0.85924\\n'" + ] + }, + "execution_count": 70, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "test_predicted = np.array(model_NB.predict(corpus_tfidf[len_train:]))\n", + "print('保存结果...')\n", + "\n", + "import os\n", + "root_dir = \"/opt/data/kaggle/getting-started/word2vec-nlp-tutorial\"\n", + " \n", + "submission_df = pd.DataFrame(data ={'id': test['id'], 'sentiment': test_predicted})\n", + "print(submission_df.head(10))\n", + "submission_df.to_csv(os.path.join(root_dir, 'submission_br.csv'), index = False)\n", + "\n", + "print('结束.')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## CNN 来处理 文本问题: https://zhuanlan.zhihu.com/p/26729228" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 分词,获取分割后的所有文章" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "import re\n", + "from bs4 import BeautifulSoup\n", + "from nltk.corpus import stopwords\n", + "\n", + "stopwords = set(stopwords.words(\"english\"))\n", + "\n", + "def review_to_sentence(review):\n", + " html = BeautifulSoup(review, \"html.parser\").get_text()\n", + " text = re.sub('[^a-zA-Z]', ' ', html).strip()\n", + " words = [word for word in text.lower().split() if word not in stopwords]\n", + " return words\n", + "\n", + "unlabeled_train = pd.read_csv(os.path.join(data_dir, 'unlabeledTrainData.tsv'), header=0, delimiter=\"\\t\", quoting=3 )\n", + "train_texts = pd.concat([train['review'], unlabeled_train['review']], axis=0, ignore_index=True)\n", + "sentences = list(map(review_to_sentence, train_texts))" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "((75000,), 219, 84, 25000, 50000)" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": 
"execute_result" + } + ], + "source": [ + "np.shape(sentences), len(sentences[0]), len(sentences[1]), len(train['review']), len( unlabeled_train['review'])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 对文章建立词典" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [], + "source": [ + "from gensim import corpora\n", + "dictionary = corpora.Dictionary(sentences)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [], + "source": [ + "dictionary.add_documents([[\" \"]])" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0 actual\n", + "1 alone\n", + "2 also\n", + "3 another\n", + "4 anyway\n" + ] + } + ], + "source": [ + "for key in dictionary.keys()[0:5]:\n", + " print (key, dictionary[key])\n" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(123350, 123350, dict, dict)" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "len(dictionary.token2id), len(dictionary.id2token), type(dictionary.token2id), type(dictionary.id2token)" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(3, 'another', 123349)" + ] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "dictionary.token2id[\"another\"], dictionary.id2token[3], dictionary.token2id[\" \"]" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [], + "source": [ + "import torch\n", + "import torch.nn as nn\n", + "import torch.nn.functional as F # 激励函数都在这\n", + "from torch.autograd import Variable\n", + "import torch.utils.data as Data\n", + "import torchvision # 数据库模块" + ] + }, + { + 
"cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "1416 \n", + " 1416\n" + ] + } + ], + "source": [ + "max_len = max([len(i) for i in sentences])\n", + "max_index = dictionary.token2id[\" \"]\n", + "max_list = [max_index for x in range(max_len)]\n", + "print(max_len, \"\\n\", len(max_list))" + ] + }, + { + "cell_type": "code", + "execution_count": 362, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Variable containing:\n", + "1.00000e-02 *\n", + " 5.6823 -5.1981\n", + "[torch.FloatTensor of size 1x2]\n", + "\n" + ] + } + ], + "source": [ + "# # prepare_sequence 是将文本的索引转化为 Variable 对象\n", + "# def prepare_sequence(seq):\n", + "# idxs = [dictionary.token2id[w] for w in seq]\n", + "# if len(idxs) < max_len:\n", + "# idxs = idxs + max_list[len(idxs):]\n", + "# # print('文本词典的索引序列:', idxs)\n", + "# tensor = torch.LongTensor(idxs)\n", + "# return Variable(tensor)\n", + "\n", + "# sentence_in = prepare_sequence(sentences[1383])\n", + "# # word_embeddings = nn.Embedding(len(dictionary.token2id), 5)\n", + "# # word_embeddings(sentence_in)\n", + "# x = cnn(sentence_in)\n", + "# print(x)" + ] + }, + { + "cell_type": "code", + "execution_count": 319, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "1416" + ] + }, + "execution_count": 319, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# sentence_in.data.size(0)" + ] + }, + { + "cell_type": "code", + "execution_count": 320, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(['stuff', 'going', 'moment'], '\\n\\n', 1416)" + ] + }, + "execution_count": 320, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# sentences[0][:3], \"\\n\\n\", len(sentence_in)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 拆分数据集" + ] + }, + { + "cell_type": "code", + 
"execution_count": 16, + "metadata": {}, + "outputs": [], + "source": [ + "# prepare_sequence 是将文本的索引转化为 Variable 对象\n", + "def prepare_sequence(seq):\n", + " idxs = [dictionary.token2id[w] for w in seq]\n", + " if len(idxs) < max_len:\n", + " idxs = idxs + max_list[len(idxs):]\n", + "# print('文本词典的索引序列:', idxs)\n", + " return idxs\n", + "\n", + "\n", + "from sklearn.model_selection import train_test_split\n", + "\n", + "X_train = list(map(prepare_sequence, sentences[:len(train)]))\n", + "X_train_d, X_test_d, y_train_d, y_test_d = train_test_split(X_train, label.tolist(), test_size=0.2, shuffle=True, random_state=42)\n" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "(list, list)" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "type(X_train_d), type(y_train_d)" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "((20000, 1416), (5000, 1416), (20000,), (5000,))" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "np.shape(X_train_d), np.shape(X_test_d), np.shape(y_train_d), np.shape(y_test_d)" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "([100, 1380, 2330], [509, 58, 14209], [0, 0, 0], [0, 1, 0])" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X_train_d[0][:3], X_test_d[0][:3], y_train_d[:3], y_test_d[:3]" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [], + "source": [ + "class textCNN(nn.Module):\n", + " \n", + " def __init__(self, vocab_size, embedding_dim, max_len, n_classes):\n", + " super(textCNN, self).__init__()\n", + " \n", + " self.model_name = 'alexnet'\n", + " self.vocab_size = 
vocab_size\n", + " self.embedding_dim = embedding_dim\n", + " self.max_len = max_len\n", + " \n", + " self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)\n", + " \n", + " self.features = nn.Sequential(\n", + "# nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2),\n", + "# nn.ReLU(inplace=True),\n", + "# nn.MaxPool2d(kernel_size=3, stride=2),\n", + "\n", + "# nn.Conv2d(64, 192, kernel_size=5, padding=2),\n", + "# nn.ReLU(inplace=True),\n", + "# nn.MaxPool2d(kernel_size=3, stride=2),\n", + "\n", + "# nn.Conv2d(192, 384, kernel_size=3, padding=1),\n", + "# nn.ReLU(inplace=True),\n", + "\n", + "# nn.Conv2d(384, 256, kernel_size=3, padding=1),\n", + "# nn.ReLU(inplace=True),\n", + "\n", + "# nn.Conv2d(256, 256, kernel_size=3, padding=1),\n", + "# nn.ReLU(inplace=True),\n", + "# nn.MaxPool2d(kernel_size=3, stride=2),\n", + " \n", + " \n", + " nn.Conv2d(1, 16, 5, 1, 2),\n", + " nn.ReLU(), # activation\n", + " nn.MaxPool2d(kernel_size=2, stride=2), # 在 2x2 空间里向下采样, output shape (16, 14, 14), 默认步长为2\n", + " \n", + " nn.Conv2d(16, 32, 5, 1, 2), # output shape (32, 14, 14)\n", + " nn.ReLU(), # activation\n", + " nn.MaxPool2d(2), # output shape (32, 7, 7)\n", + "\n", + " nn.Conv2d(32, 64, 5, 1, 2), \n", + " nn.ReLU(), \n", + " nn.MaxPool2d(2), \n", + " \n", + " nn.Conv2d(64, 128, 5, 1, 2), \n", + " nn.ReLU(), \n", + " nn.MaxPool2d(2)\n", + " )\n", + " \n", + " self.classifier = nn.Sequential(\n", + "# nn.Dropout(),\n", + "# nn.Linear(256 * 6 * 6, 4096),\n", + "# nn.ReLU(inplace=True),\n", + "\n", + "# nn.Dropout(),\n", + "# nn.Linear(4096, 4096),\n", + "# nn.ReLU(inplace=True),\n", + "\n", + "# nn.Linear(4096, n_classes),\n", + " \n", + " nn.Dropout(),\n", + " nn.Linear(45056, n_classes), \n", + " )\n", + " \n", + " def forward(self, x):\n", + " x = self.word_embeddings(x)\n", + " x = x.view(len(x), 1, self.max_len, self.embedding_dim)\n", + " x = self.features(x)\n", + " x = x.view(x.size(0), -1)\n", + " output = self.classifier(x)\n", + " return output # return 
x for visualization\n" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "textCNN(\n", + " (word_embeddings): Embedding(123350, 64)\n", + " (features): Sequential(\n", + " (0): Conv2d(1, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n", + " (1): ReLU()\n", + " (2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)\n", + " (3): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n", + " (4): ReLU()\n", + " (5): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)\n", + " (6): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n", + " (7): ReLU()\n", + " (8): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)\n", + " (9): Conv2d(64, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n", + " (10): ReLU()\n", + " (11): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1), ceil_mode=False)\n", + " )\n", + " (classifier): Sequential(\n", + " (0): Dropout(p=0.5)\n", + " (1): Linear(in_features=45056, out_features=2, bias=True)\n", + " )\n", + ")\n" + ] + } + ], + "source": [ + "cnn = textCNN(len(dictionary.token2id), 64, max_len, 2)\n", + "print(cnn) # net architecture\n", + "\n", + "# optimizer = torch.optim.SGD(cnn.parameters(), lr=0.02) # 传入 net 的所有参数, 学习率\n", + "# lr 优化步长\n", + "# weight_decay(权重衰减): 也叫 L2 regularization (1e-5就是 1*(10的-5次方)即0.00001)\n", + "optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-5, weight_decay=1e-7)\n", + "# 算误差的时候, 注意真实值!不是! 
one-hot 形式的, 而是1D Tensor, (batch,)\n", + "# 但是预测值是2D tensor (batch, n_classes)\n", + "loss_func = nn.CrossEntropyLoss()" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": {}, + "outputs": [], + "source": [ + "import time\n", + "import math\n", + "\n", + "def timeSince(since):\n", + " now = time.time()\n", + " s = now - since\n", + " m = math.floor(s / 60)\n", + " s -= m * 60\n", + " return '%dm %ds' % (m, s)\n", + "\n", + "start = time.time()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": {}, + "outputs": [], + "source": [ + "tr_x = torch.LongTensor(X_train_d)\n", + "tr_y = torch.LongTensor(y_train_d)\n", + "te_x = torch.LongTensor(X_test_d)\n", + "te_y = torch.LongTensor(y_test_d)\n", + "\n", + "torch_train_dataset = Data.TensorDataset(data_tensor=tr_x, target_tensor=tr_y)\n", + "torch_test_dataset = Data.TensorDataset(data_tensor=te_x, target_tensor=te_y)\n", + "\n", + "BATCH_SIZE = 20 # 批训练的数据个数\n", + "\n", + "# 把 dataset 放入 DataLoader\n", + "train_loader = Data.DataLoader(\n", + " dataset=torch_train_dataset, # torch TensorDataset format\n", + " batch_size=BATCH_SIZE, # 每个 batch 加载多少个样本\n", + " shuffle=True, # 要不要打乱数据 (打乱比较好)\n", + " num_workers=2, # 多线程来读数据\n", + ")\n", + "test_loader = Data.DataLoader(\n", + " dataset=torch_test_dataset, # torch TensorDataset format\n", + " batch_size=BATCH_SIZE, # 每个 batch 加载多少个样本\n", + " shuffle=True, # 要不要打乱数据 (打乱比较好)\n", + " num_workers=2, # 多线程来读数据\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "pred_y:\t [1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 1 1 1 1]\n", + "target_y:\t [0 1 1 1 0 1 0 0 1 0 0 1 1 1 1 0 1 0 1 0]\n", + "0-19 7.60% (101m 42s) logloss=12.09 \t accuracy=0.65 \t loss=0.6844480037689209\n", + "pred_y:\t [1 1 1 1 1 
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\n", + "target_y:\t [1 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 1 1 1 0]\n", + "0-39 15.60% (103m 24s) logloss=15.54 \t accuracy=0.55 \t loss=0.6886069774627686\n", + "pred_y:\t [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\n", + "target_y:\t [0 1 1 1 1 0 1 0 1 0 1 1 1 1 0 1 1 0 0 1]\n", + "0-59 23.60% (106m 31s) logloss=12.09 \t accuracy=0.65 \t loss=0.6793051958084106\n", + "pred_y:\t [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\n", + "target_y:\t [0 0 0 0 1 1 0 0 0 0 1 0 1 1 0 0 1 0 0 1]\n", + "0-79 31.60% (114m 37s) logloss=22.45 \t accuracy=0.35 \t loss=0.6999103426933289\n", + "pred_y:\t [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\n", + "target_y:\t [0 0 0 0 0 1 1 0 1 0 1 1 0 1 1 0 0 0 1 0]\n", + "0-99 39.60% (120m 29s) logloss=20.72 \t accuracy=0.40 \t loss=0.7069584131240845\n", + "pred_y:\t [1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\n", + "target_y:\t [1 1 1 1 0 1 0 0 0 0 0 0 0 0 0 1 1 1 0 1]\n", + "0-119 47.60% (125m 5s) logloss=19.00 \t accuracy=0.45 \t loss=0.7006260752677917\n", + "pred_y:\t [0 0 0 1 1 0 0 0 0 0 1 1 0 1 1 0 0 0 0 1]\n", + "target_y:\t [0 1 1 0 0 1 0 0 1 1 0 0 0 0 1 1 1 1 1 0]\n", + "0-139 55.60% (132m 49s) logloss=25.90 \t accuracy=0.25 \t loss=0.7005825638771057\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 1 1 1 1 1 0 1 0 1 1 1 1 1 0 1 0 1 1 0]\n", + "0-159 63.60% (140m 33s) logloss=24.18 \t accuracy=0.30 \t loss=0.699988067150116\n", + "pred_y:\t [1 0 1 0 0 0 1 1 0 0 1 0 0 0 1 0 0 0 0 0]\n", + "target_y:\t [1 1 1 1 0 1 0 0 0 0 1 0 0 0 1 1 1 0 1 1]\n", + "0-179 71.60% (148m 14s) logloss=15.54 \t accuracy=0.55 \t loss=0.689541220664978\n", + "pred_y:\t [1 1 0 1 1 1 1 1 0 1 1 1 1 1 0 0 1 0 1 1]\n", + "target_y:\t [0 1 0 0 1 0 0 0 0 1 1 1 0 0 1 1 1 1 0 1]\n", + "0-199 79.60% (154m 37s) logloss=19.00 \t accuracy=0.45 \t loss=0.6907564401626587\n", + "pred_y:\t [1 1 0 1 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1]\n", + "target_y:\t [0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 0 1 1 0 0]\n", + "0-219 87.60% (160m 44s) 
logloss=24.18 \t accuracy=0.30 \t loss=0.6987239122390747\n", + "pred_y:\t [1 0 1 1 0 1 1 1 1 1 0 1 1 1 0 0 0 1 1 1]\n", + "target_y:\t [0 1 0 0 0 0 0 1 0 1 0 1 0 0 0 1 0 0 1 1]\n", + "0-239 95.60% (167m 26s) logloss=19.00 \t accuracy=0.45 \t loss=0.691254734992981\n", + "pred_y:\t [0 1 1 0 1 0 0 1 1 0 1 0 1 0 1 0 1 1 1 1]\n", + "target_y:\t [1 0 1 0 1 1 0 1 1 0 1 1 1 0 1 1 1 0 0 0]\n", + "0-259 103.60% (174m 38s) logloss=13.82 \t accuracy=0.60 \t loss=0.6896542310714722\n", + "pred_y:\t [0 1 1 1 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0]\n", + "target_y:\t [1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 1]\n", + "0-279 111.60% (181m 56s) logloss=12.09 \t accuracy=0.65 \t loss=0.6853312253952026\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0]\n", + "target_y:\t [1 1 1 1 1 0 1 1 0 1 1 1 1 1 0 0 0 1 1 1]\n", + "0-299 119.60% (188m 49s) logloss=25.90 \t accuracy=0.25 \t loss=0.7102211713790894\n", + "pred_y:\t [0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 0 0 0 0 1 1 1 0 0 0 1 1 0 0 0 0 1 1 1]\n", + "0-319 127.60% (196m 8s) logloss=12.09 \t accuracy=0.65 \t loss=0.6825012564659119\n", + "pred_y:\t [0 1 0 0 0 0 1 0 1 1 0 0 0 0 0 0 1 1 0 0]\n", + "target_y:\t [0 0 0 1 0 1 1 0 0 1 1 0 1 1 0 0 0 0 1 0]\n", + "0-339 135.60% (203m 21s) logloss=17.27 \t accuracy=0.50 \t loss=0.693179190158844\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 0]\n", + "target_y:\t [0 1 0 1 1 0 0 1 0 1 0 0 0 0 0 1 1 0 0 1]\n", + "0-359 143.60% (210m 48s) logloss=13.82 \t accuracy=0.60 \t loss=0.6911899447441101\n", + "pred_y:\t [0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 0 1 0 0 1 0 0 0 1 1 1 0 0 0 0 0 1 0 1]\n", + "0-379 151.60% (218m 11s) logloss=10.36 \t accuracy=0.70 \t loss=0.6836111545562744\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 1 1 1 0 0 0 1 0 0 1 1 0 1 0 0 1 1 1 0]\n", + "0-399 159.60% (225m 11s) logloss=17.27 \t accuracy=0.50 \t loss=0.6962472200393677\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 
0]\n", + "target_y:\t [0 1 1 0 0 1 0 0 0 1 1 0 0 0 1 1 0 0 1 1]\n", + "0-419 167.60% (232m 36s) logloss=17.27 \t accuracy=0.50 \t loss=0.6901986598968506\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1]\n", + "target_y:\t [1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1]\n", + "0-439 175.60% (240m 1s) logloss=8.63 \t accuracy=0.75 \t loss=0.6787502765655518\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 1 0 1 1 0 0 0 0 0 1 0 1 1 0 0 0 1 0 1]\n", + "0-459 183.60% (247m 31s) logloss=13.82 \t accuracy=0.60 \t loss=0.694130003452301\n", + "pred_y:\t [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [1 0 1 1 0 0 0 0 0 1 0 1 1 1 1 1 0 0 0 0]\n", + "0-479 191.60% (254m 5s) logloss=15.54 \t accuracy=0.55 \t loss=0.6915131211280823\n", + "pred_y:\t [0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0]\n", + "target_y:\t [0 0 1 1 1 0 0 0 1 1 1 1 1 1 0 1 0 1 0 1]\n", + "0-499 199.60% (260m 32s) logloss=20.72 \t accuracy=0.40 \t loss=0.696361243724823\n", + "pred_y:\t [0 0 0 0 1 1 0 0 0 1 0 0 1 0 0 1 1 1 0 1]\n", + "target_y:\t [0 0 0 0 0 1 0 0 1 0 0 1 1 0 1 0 0 0 1 1]\n", + "0-519 207.60% (267m 39s) logloss=15.54 \t accuracy=0.55 \t loss=0.6885088086128235\n", + "pred_y:\t [0 0 0 1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1]\n", + "target_y:\t [1 0 0 0 1 0 1 1 1 0 1 0 0 0 0 1 1 0 0 0]\n", + "0-539 215.60% (275m 2s) logloss=19.00 \t accuracy=0.45 \t loss=0.6945030093193054\n", + "pred_y:\t [1 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 0]\n", + "target_y:\t [1 0 0 0 1 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0]\n", + "0-559 223.60% (281m 12s) logloss=13.82 \t accuracy=0.60 \t loss=0.6861685514450073\n", + "pred_y:\t [1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1]\n", + "target_y:\t [0 1 1 1 1 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0]\n", + "0-579 231.60% (287m 12s) logloss=22.45 \t accuracy=0.35 \t loss=0.6945004463195801\n", + "pred_y:\t [0 1 0 0 1 1 1 1 0 1 1 1 0 0 0 1 0 1 0 1]\n", + "target_y:\t [1 0 1 0 1 0 1 0 0 0 0 0 1 0 0 0 1 1 0 1]\n", + "0-599 239.60% (293m 44s) logloss=19.00 \t 
accuracy=0.45 \t loss=0.6927602887153625\n", + "pred_y:\t [1 1 1 1 1 0 1 1 1 1 1 0 0 0 1 0 1 0 1 0]\n", + "target_y:\t [0 0 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1]\n", + "0-619 247.60% (300m 38s) logloss=12.09 \t accuracy=0.65 \t loss=0.6841057538986206\n", + "pred_y:\t [1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1]\n", + "target_y:\t [1 1 0 1 0 1 1 1 1 0 0 1 0 0 0 1 0 1 0 0]\n", + "0-639 255.60% (307m 59s) logloss=17.27 \t accuracy=0.50 \t loss=0.6867891550064087\n", + "pred_y:\t [1 0 0 1 0 0 0 0 1 1 1 1 0 0 0 1 1 0 1 0]\n", + "target_y:\t [1 0 1 0 0 1 1 1 0 1 0 0 1 0 0 0 1 1 0 0]\n", + "0-659 263.60% (315m 15s) logloss=20.72 \t accuracy=0.40 \t loss=0.697279691696167\n", + "pred_y:\t [0 1 1 1 0 1 1 0 1 1 1 1 1 1 0 1 1 1 0 0]\n", + "target_y:\t [1 0 0 0 1 0 0 0 1 0 0 0 1 1 1 0 0 1 1 0]\n", + "0-679 271.60% (322m 12s) logloss=24.18 \t accuracy=0.30 \t loss=0.6973448991775513\n", + "pred_y:\t [1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1]\n", + "target_y:\t [1 1 1 1 0 1 1 0 0 0 1 1 1 1 1 0 0 0 0 0]\n", + "0-699 279.60% (329m 20s) logloss=17.27 \t accuracy=0.50 \t loss=0.693761944770813\n", + "pred_y:\t [0 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 0]\n", + "target_y:\t [1 0 1 1 0 1 0 1 0 1 1 0 0 0 1 0 1 1 1 1]\n", + "0-719 287.60% (335m 57s) logloss=17.27 \t accuracy=0.50 \t loss=0.6938427686691284\n", + "pred_y:\t [0 1 1 1 1 0 0 0 0 0 0 0 0 1 1 0 1 0 0 0]\n", + "target_y:\t [1 0 1 0 0 1 1 0 0 1 1 0 0 1 0 0 1 0 1 0]\n", + "0-739 295.60% (342m 53s) logloss=17.27 \t accuracy=0.50 \t loss=0.6820307970046997\n", + "pred_y:\t [0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 1 1 1 1 1 1 1 0 0 1 0 1 0 0 1 1 0 0 0]\n", + "0-759 303.60% (350m 7s) logloss=15.54 \t accuracy=0.55 \t loss=0.6916964650154114\n", + "pred_y:\t [0 1 0 1 0 0 0 1 0 1 1 1 0 0 0 0 1 0 0 0]\n", + "target_y:\t [1 1 1 0 0 1 1 1 1 0 1 1 1 1 1 1 0 0 0 0]\n", + "0-779 311.60% (357m 5s) logloss=20.72 \t accuracy=0.40 \t loss=0.6954394578933716\n", + "pred_y:\t [0 1 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 0 1 0]\n", + 
"target_y:\t [0 0 1 1 1 1 0 0 0 0 1 0 1 1 0 1 1 0 0 0]\n", + "0-799 319.60% (364m 23s) logloss=20.72 \t accuracy=0.40 \t loss=0.7011473178863525\n", + "pred_y:\t [0 0 0 0 0 0 1 0 1 0 1 0 1 0 1 1 1 0 1 0]\n", + "target_y:\t [0 1 1 1 1 0 0 1 1 0 1 0 0 0 0 1 1 1 0 1]\n", + "0-819 327.60% (371m 51s) logloss=19.00 \t accuracy=0.45 \t loss=0.6923590898513794\n", + "pred_y:\t [0 1 1 0 1 0 1 1 1 0 1 1 0 0 1 1 0 1 1 1]\n", + "target_y:\t [0 0 1 1 0 0 0 1 1 1 0 0 0 1 0 0 1 0 1 1]\n", + "0-839 335.60% (379m 18s) logloss=20.72 \t accuracy=0.40 \t loss=0.7093911170959473\n", + "pred_y:\t [1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 0 1 1 1 0]\n", + "target_y:\t [0 0 0 0 0 1 1 1 0 0 0 1 1 1 1 1 0 1 0 1]\n", + "0-859 343.60% (386m 37s) logloss=17.27 \t accuracy=0.50 \t loss=0.6935914158821106\n", + "pred_y:\t [1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 0 1 1 0 1]\n", + "target_y:\t [1 1 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0]\n", + "0-879 351.60% (394m 6s) logloss=17.27 \t accuracy=0.50 \t loss=0.6971246004104614\n", + "pred_y:\t [1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1]\n", + "target_y:\t [1 1 0 1 0 1 1 1 0 0 1 0 0 0 0 1 0 1 0 1]\n", + "0-899 359.60% (401m 57s) logloss=13.82 \t accuracy=0.60 \t loss=0.7006368637084961\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "pred_y:\t [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0]\n", + "target_y:\t [1 0 0 0 0 0 1 1 0 1 1 1 0 0 1 0 0 0 0 1]\n", + "0-919 367.60% (409m 9s) logloss=13.82 \t accuracy=0.60 \t loss=0.684188961982727\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [1 0 1 1 1 0 0 1 1 0 0 0 1 1 1 0 1 0 0 0]\n", + "0-939 375.60% (416m 38s) logloss=17.27 \t accuracy=0.50 \t loss=0.7021096348762512\n", + "pred_y:\t [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 0 0 0 1 1 0 1 0 0 1 1 1 1 1 1 1 0 0 1]\n", + "0-959 383.60% (424m 14s) logloss=20.72 \t accuracy=0.40 \t loss=0.6950109004974365\n", + "pred_y:\t [0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 1 1 1 1 1 1 1 1 1 1 0 1 0 
0 0 0 1 0 0]\n", + "0-979 391.60% (431m 21s) logloss=17.27 \t accuracy=0.50 \t loss=0.6897827982902527\n", + "pred_y:\t [0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1 1 1]\n", + "target_y:\t [0 0 0 1 1 1 1 0 1 1 1 0 0 1 0 1 0 1 1 0]\n", + "0-999 399.60% (438m 31s) logloss=15.54 \t accuracy=0.55 \t loss=0.6863324642181396\n", + "pred_y:\t [0 0 0 1 0 0 0 1 0 1 1 0 0 1 0 0 0 0 0 0]\n", + "target_y:\t [0 1 0 1 1 0 1 1 0 0 0 1 1 1 0 1 1 1 0 0]\n", + "1-69 27.60% (447m 6s) logloss=17.27 \t accuracy=0.50 \t loss=0.6887694597244263\n", + "pred_y:\t [1 0 1 1 1 0 0 0 0 0 0 1 1 1 0 0 0 1 0 0]\n", + "target_y:\t [1 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 1 1 0]\n", + "1-89 35.60% (455m 24s) logloss=6.91 \t accuracy=0.80 \t loss=0.6829289197921753\n", + "pred_y:\t [0 1 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 0]\n", + "target_y:\t [0 1 1 1 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 1]\n", + "1-109 43.60% (463m 55s) logloss=13.82 \t accuracy=0.60 \t loss=0.6840181350708008\n", + "pred_y:\t [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0]\n", + "target_y:\t [0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 1]\n", + "1-129 51.60% (472m 25s) logloss=3.45 \t accuracy=0.90 \t loss=0.6634591817855835\n", + "pred_y:\t [0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [0 1 1 1 0 0 0 0 0 1 1 0 0 0 1 0 0 1 1 1]\n", + "1-149 59.60% (481m 20s) logloss=15.54 \t accuracy=0.55 \t loss=0.6897100210189819\n", + "pred_y:\t [0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]\n", + "target_y:\t [1 0 0 1 0 0 0 1 0 0 1 1 0 0 1 0 1 0 0 0]\n", + "1-169 67.60% (490m 7s) logloss=12.09 \t accuracy=0.65 \t loss=0.6846562623977661\n", + "pred_y:\t [0 1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0]\n", + "target_y:\t [1 1 0 1 1 0 1 1 1 0 0 1 1 0 1 0 0 0 0 1]\n", + "1-189 75.60% (498m 56s) logloss=12.09 \t accuracy=0.65 \t loss=0.6853525638580322\n", + "pred_y:\t [0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0]\n", + "target_y:\t [1 0 1 1 1 0 1 1 0 0 0 1 0 0 0 1 0 0 0 1]\n", + "1-209 83.60% (507m 31s) logloss=19.00 \t accuracy=0.45 \t loss=0.7002253532409668\n", + "pred_y:\t [1 
0 1 0 1 0 0 1 0 1 1 0 1 1 1 0 1 0 1 0]\n", + "target_y:\t [1 1 0 1 0 0 0 0 1 1 1 0 0 0 1 0 1 0 1 0]\n", + "1-229 91.60% (516m 28s) logloss=13.82 \t accuracy=0.60 \t loss=0.6863614320755005\n", + "... (hundreds of similar pred_y/target_y/log lines omitted: accuracy fluctuates between 0.25 and 0.85 with no clear trend, and loss hovers around 0.69, i.e. the network is barely learning) ...\n", + "3-1049 419.60% (1273m 47s) logloss=22.45 \t accuracy=0.35 \t loss=0.7021511793136597\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Process Process-20:\n", + "Process Process-19:\n", + "Traceback (most recent call last):\n", + "Traceback (most recent call last):\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py\", line 258, in _bootstrap\n", + " self.run()\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py\", line 258, in _bootstrap\n", + " self.run()\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py\", line 93, in run\n", + " self._target(*self._args, **self._kwargs)\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py\", line 93, in run\n", + " self._target(*self._args, **self._kwargs)\n", + " File \"/Users/jiangzl/.virtualenvs/python3.6/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 50, in _worker_loop\n", + 
" r = index_queue.get()\n", + " File \"/Users/jiangzl/.virtualenvs/python3.6/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 50, in _worker_loop\n", + " r = index_queue.get()\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/queues.py\", line 335, in get\n", + " res = self._reader.recv_bytes()\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/queues.py\", line 334, in get\n", + " with self._rlock:\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py\", line 216, in recv_bytes\n", + " buf = self._recv_bytes(maxlength)\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py\", line 407, in _recv_bytes\n", + " buf = self._recv(4)\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/synchronize.py\", line 96, in __enter__\n", + " return self._semlock.__enter__()\n", + " File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py\", line 379, in _recv\n", + " chunk = read(handle, remaining)\n", + "KeyboardInterrupt\n", + "KeyboardInterrupt\n" + ] + }, + { + "ename": "KeyboardInterrupt", + "evalue": "", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)", + "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 14\u001b[0m \u001b[0mb_y\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mVariable\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbatch_y\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# batch y\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 16\u001b[0;31m \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mcnn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mb_x\u001b[0m\u001b[0;34m)\u001b[0m                \u001b[0;31m# 喂给 net 训练数据 x, 输出分析值\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     17\u001b[0m         \u001b[0mloss\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mloss_func\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mout\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mb_y\u001b[0m\u001b[0;34m)\u001b[0m   \u001b[0;31m# 计算两者的误差\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     18\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
+      "\u001b[0;31m... (中间重复的 torch.nn Module.__call__ / Sequential.forward / ReLU.forward 堆栈帧已截断; 中断发生在 cnn.forward 的 self.features(x) 调用中) ...\u001b[0m\n",
+      "\u001b[0;32m~/.virtualenvs/python3.6/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m    355\u001b[0m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    356\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 357\u001b[0;31m             \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    358\u001b[0m         \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    359\u001b[0m             \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;32m~/.virtualenvs/python3.6/lib/python3.6/site-packages/torch/nn/modules/activation.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m 41\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 42\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 43\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mF\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mthreshold\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mthreshold\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minplace\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 44\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 45\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m__repr__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;31mKeyboardInterrupt\u001b[0m: " + ] + } + ], + "source": [ + "from sklearn.metrics import log_loss\n", + "\n", + "Epoch = 5\n", + "print_every = 20\n", + "max_step = len(X_train_d)/BATCH_SIZE\n", + "\n", + "# 跟踪绘图的损失\n", + "current_loss = 0\n", + "all_losses = []\n", + "\n", + "for epoch in range(Epoch):\n", + " for step, (batch_x, batch_y) in enumerate(train_loader): # 每一步 loader 释放一小批数据用来学习\n", + " b_x = Variable(batch_x) # batch x\n", + " b_y = Variable(batch_y) # batch y\n", + " \n", + " out = cnn(b_x) # 喂给 net 训练数据 x, 输出分析值\n", 
+ " loss = loss_func(out, b_y) # 计算两者的误差\n", + "\n", + " optimizer.zero_grad() # 清空上一步的残余更新参数值\n", + " loss.backward() # 误差反向传播, 计算参数更新值\n", + " optimizer.step() # 将参数更新值施加到 net 的 parameters 上\n", + "\n", + " current_loss += loss.data[0]\n", + " # print(F.softmax(out), '---', torch.max(F.softmax(out), 1), 'xxx', torch.max(F.softmax(out), 1)[1])\n", + " if step % print_every == print_every-1:\n", + " # softmax 用来计算输出分类的概率,然后max是选出最大的一组:(概率值,分类值)\n", + " prediction = torch.max(F.softmax(out, dim=1), 1)[1]\n", + " pred_y = prediction.data.numpy().squeeze()\n", + " target_y = b_y.data.numpy()\n", + " print(\"pred_y:\\t\", pred_y)\n", + " print(\"target_y:\\t\", target_y)\n", + " logloss = log_loss(target_y, pred_y, eps=1e-15)\n", + " accuracy = sum(pred_y == target_y)/len(target_y) # 预测中有多少和真实值一样\n", + "\n", + " # 总次数\n", + " loop_step = epoch*max_step + step\n", + " total_step = Epoch*max_step\n", + " print('%d-%d %.2f%% (%s) logloss=%.2f \\t accuracy=%.2f \\t loss=%s' % (epoch, loop_step, loop_step/total_step*100, timeSince(start), logloss, accuracy, loss.data[0]))\n", + "\n", + " all_losses.append(current_loss/print_every)\n", + " current_loss = 0\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.metrics import log_loss\n", + "\n", + "Epoch = 1\n", + "print_every = 100\n", + "max_step = len(X_train_d)\n", + "\n", + "# 跟踪绘图的损失\n", + "current_loss = 0\n", + "all_losses = []\n", + "pre_result = []\n", + "rea_result = []\n", + "\n", + "for epoch in range(Epoch):\n", + " # 优化为批处理\n", + " \n", + " # step = len(X_train_d)/BATCH_SIZE (train_loader 为 BATCH_SIZE 大小的集合)\n", + " for step, (x, y) in enumerate(zip(X_train_d[1384:], y_train_d[1384:])):\n", + " b_x = prepare_sequence(x)\n", + " b_y = Variable(torch.LongTensor([y])) # batch y\n", + " \n", + " out = cnn(b_x) # 喂给 net 训练数据 
x, 输出分析值\n", + " loss = loss_func(out, b_y) # 计算两者的误差\n", + "\n", + " optimizer.zero_grad() # 清空上一步的残余更新参数值\n", + " loss.backward() # 误差反向传播, 计算参数更新值\n", + " optimizer.step() # 将参数更新值施加到 net 的 parameters 上\n", + "\n", + " current_loss += loss.data[0]\n", + "\n", + " prediction = torch.max(F.softmax(out, dim=1), 1)[1]\n", + " pred_y = prediction.data.numpy()\n", + " target_y = b_y.data.numpy()\n", + " \n", + "# print('预测---', pred_y)\n", + "# print('目标---', target_y)\n", + " \n", + "# if step>2:\n", + "# break\n", + "\n", + " # softmax 用来计算输出分类的概率,然后max是选出最大的一组:(概率值,分类值)\n", + " prediction = torch.max(F.softmax(out, dim=1), 1)[1]\n", + " pre_result.append(prediction.data.numpy()[0])\n", + " rea_result.append(b_y.data.numpy()[0])\n", + " \n", + " # print(F.softmax(out), '---', torch.max(F.softmax(out), 1), 'xxx', torch.max(F.softmax(out), 1)[1])\n", + " if step % print_every == print_every-1:\n", + "# print('预测---', pred_y)\n", + "# print('目标---', target_y)\n", + " logloss = log_loss(rea_result, pre_result, eps=1e-15)\n", + "# print(\"pre_result: \\t\", pre_result)\n", + "# print(\"rea_result: \\t\", rea_result)\n", + " accuracy = sum(np.array(pre_result) == np.array(rea_result))/len(rea_result) # 预测中有多少和真实值一样\n", + " \n", + " # 总次数\n", + " loop_step = epoch*max_step + step\n", + " total_step = Epoch*max_step\n", + " \n", + " print('%d-%d %.2f%% (%s) logloss=%.2f \\t accuracy=%.2f \\t loss=%s' % (epoch, loop_step, loop_step/total_step*100, timeSince(start), logloss, accuracy, loss.data[0]))\n", + "\n", + " all_losses.append(current_loss/print_every)\n", + " current_loss = 0\n", + " pre_result = []\n", + " rea_result = []\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "一,train loss与test loss结果分析\n", + "\n", + "* train loss 不断下降,test loss不断下降,说明网络仍在学习;\n", + "* train loss 不断下降,test loss趋于不变,说明网络过拟合;\n", + "* train loss 趋于不变,test loss不断下降,说明数据集100%有问题;\n", + "* train loss 趋于不变,test loss趋于不变,说明学习遇到瓶颈,需要减小学习率或批量数目;\n", + "* train loss 
不断上升,test loss不断上升,说明网络结构设计不当,训练超参数设置不当,数据集经过清洗等问题。\n" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[]" + ] + }, + "execution_count": 35, + "metadata": {}, + "output_type": "execute_result" + }, + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYQAAAD8CAYAAAB3u9PLAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzsvXmcZEd5JXriLrnW2lXVkrpbK2pJNJYQIDbjhWVky2Zs/IZnFg+MsZ/x85vBHhsbP+znwX68wWOPwfazB9sPA94wYMb2YPEQSIARCCEJSRjtW6sXqRepq6urqyozK/NuMX9EfHEj4t6bmZW1dLX6nt9PP0lZmTfvvXkjvjjnfN8XjHOOEiVKlChRwjnTJ1CiRIkSJbYHyoBQokSJEiUAlAGhRIkSJUpIlAGhRIkSJUoAKANCiRIlSpSQKANCiRIlSpQAMGRAYIzdwBh7jDG2nzH23oL3vIkx9jBj7CHG2Ce113+XMfag/OfN2uuvY4x9mzH2HcbYNxhjl6//ckqUKFGixKhgg+oQGGMugMcBXA/gCIC7AbyVc/6w9p69AD4D4LWc80XG2E7O+QnG2OsB/CKAHwJQBXArgNdxzpcZY48DeAPn/BHG2L8H8DLO+Ts2/ApLlChRosRQGIYhvAzAfs75Ac55AODTAN5gveedAD7MOV8EAM75Cfn6PgBf55xHnPM2gPsB3CD/xgFMyP+eBHBs9MsoUaJEiRLrhTfEe3YDeFr7/yMAXm695woAYIzdDsAF8Fuc8y8CuA/AbzLGPgSgAeA1AIhZ/AyAmxhjqwCWAbxi0InMzs7ySy65ZIhTLlGiRIkShHvvvfck53xu0PuGCQjDwAOwF8CrAewB8HXG2NWc81sYYy8F8E0A8wDuABDLz/wSgB/mnN/FGHsPgN+HCBIGGGM/C+BnAeCiiy7CPffcs0GnXKJEiRLnBhhjh4d53zCS0VEAF2r/v0e+puMIgBs55yHn/CCE57AXADjnH+CcX8s5vx4AA/A4Y2wOwAs553fJz/8dgO/O+3LO+Uc459dxzq+bmxsY4EqUKFGixIgYJiDcDWAvY+xSxlgFwFsA3Gi957MQ7ACMsVkICekAY8xljM3I168BcA2AWwAsAphkjF0hP389gEfWeS0lSpQoUWIdGCgZcc4jxti7ANwM4Q98nHP+EGPs/QDu4ZzfKP/2A4yxhyEkofdwzhcYYzUAtzHGAOETvI1zHgEAY+ydAP6BMZZABIif3oTrK1GiRIkSQ2Jg2ul2wnXXXcdLD6FEiRIl1gbG2L2c8+sGva+sVC5RokSJEgDKgFCiRIkSJSTKgFCiRIkSJQCUAaFEiRIlcnHngQXsP9E606expSgDQokSJUrk4Nf+8QH8ya37z/RpbCnKgFCiRIkSOQiiBEGUnOnT2FKUAaHEcxqdIMKpdnCmT6PEWYg44YiTsyctfyNQBoQSz2l88ObH8baP3jX4jSVKWIgSjqgMCCVKPHew0O7hyGLnTJ9GibMQCS8ZQokSzynECUerF+FsqsgvsT0QxUnJEEqUeC4hTjgSDrSDePCbS5TQkHAgTvJN5STheMtH7sBXHnl2i89qc1EGhBLPaRDlX14Nz/CZlDjbECccUZzPEMIkwZ0HTuHBo8tbfFabizIglHhOI5FS0Uo3OsNnUuJsg2CX+QG
BiEMRgxiEVi/Cn33tSSTbTJIqA0KJ5zRIA17plgyhxNoQ8+Iso0gGgnhEb+q2x+fxO194FI+fWBn5/DYDZUAo8ZyGkozKgFBiDeAyw6goy4iIwaimMwWSMCoZQokSW4ZSMioxCmieL/IQaEIfVfKhQBOOKDltFsqAUOI5DRrQy2VAOGuRnIGKYfq+ou+l10dlCKQ0bbc6hzIglHhOgxhCmWV09uI/f/4R/OTHv7Wl35lO+AVpp+tkCPT5MC4ZQokSW4Y4KSWjsx1HFjt49JmtNV+VJFQw3ysGMaKpPEiSOlMoA0KJdeOphQ4WWr0zfRq5oPFWmspnL+KEY7ETbGmK5iCGMEhSGgRiCKVkVOI5h5/7xL34r1987EyfRi4oT7xkCGcvIukhbGVQVxN+kam8zoDAS8moxHMVS6vhtl2B03gr6xDOXtCku5VtzAeZxiQVjWoqJ6WpXOK5il60fZuAkcxQmspnL0i22cqAMEjSoedq/Wmn22vcDBUQGGM3MMYeY4ztZ4y9t+A9b2KMPcwYe4gx9knt9d9ljD0o/3mz9jpjjH2AMfY4Y+wRxtgvrP9ySpwJhHGCaJtRX0JUSkZnPWjyXNjCgBBtMkMgyWi7jRtv0BsYYy6ADwO4HsARAHczxm7knD+svWcvgF8D8CrO+SJjbKd8/fUAXgzgWgBVALcyxr7AOV8G8A4AFwK4inOe0GdKnH0ItjNDkKdVBoSzF9EZkIwGMQDKDirqdTTw+JRltM3GzTAM4WUA9nPOD3DOAwCfBvAG6z3vBPBhzvkiAHDOT8jX9wH4Ouc84py3AdwP4Ab5t/8DwPs554n1mRJnGQRD2F4PNqFsXXH240x4CIMYQqJW+OvLMtpu42aYgLAbwNPa/x+Rr+m4AsAVjLHbGWN3MsZo0r8PwA2MsQZjbBbAayBYAQA8D8CbGWP3MMa+IFlGBoyxn5XvuWd+fn7Y6yqxRUjkNoPbzRwj0Hl1gnjb0fMSw4EmzTNhKg+qVF4vQxi1W+pmYaNMZQ/AXgCvBvBWAH/OGJvinN8C4CYA3wTwKQB3AKCdSqoAupzz6wD8OYCP5x2Yc/4Rzvl1nPPr5ubmNuh0S2wUAjnJbreeLAR9QLd6o8lGp9oBPvzV/eWua2cIZ4IhqBX8gErl9aedbq9napiAcBTpqh4A9sjXdBwBcCPnPOScHwTwOESAAOf8A5zzaznn1wNg8m/0mX+U//0/AFwz2iWUOJOggLDdqC8h5hwVVzzmy6ujBYQvPfwMfu/mx/D0qdWNPLUSQ4Im5S01lZVHkO8jxOvsdjoo4JwpDBMQ7gawlzF2KWOsAuAtAG603vNZCHYAKQ1dAeAAY8xljM3I16+BmPRv0T7zGvnf3480UJQ4ixBEMiBsU8koSTgmGz6A0X0EMqSDUnI6I6BV+OIZYAhAfnuKjZKMttu4GZhlxDmPGGPvAnAzABfAxznnDzHG3g/gHs75jfJvP8AYexhCEnoP53yBMVYDcBtjDACWAbyNc07LtN8B8LeMsV8C0ALwMxt9cSU2H6FiCNtzsowSjvPGKphf6Y2cadSR+zFvt6rScwVnIstIl4LihMN38/8+KjNe7+c3CwMDAgBwzm+C8AL0196n/TcH8G75j/6eLkSmUd4xTwN4/RrPt8Q2AzGE7WoqJwnHVKMCYHSG0Jbew3YbvOcK0jqEreuXFVkBIXNOfH0Mga+zjmGzUFYql1gXwu1uKnOOaSkZjcoQyIzertf4XAdNmt0wQSfYmnoSfaLPm7TJV1hv64rtxqzLgFCALz38LHpRPPiN5zh6xBC26eo5Sjim6pIhjNi+ghhCGG2vwXuuIE44Kp6YqhZaWyMb2ZJR0d91w/l0J8Bv3fjQUPNGUjKEswcHT7bxzr++B19+uKyVGwRKm9tuPVkIScLRrApltDfihN6WHsJ2G7znCqI4wc7xKgBgsbP1ASEvEyivdcWdB07hL795CI8NsXdDuR/CWQTqjLlUNkQbiO3uIcSco14
Rj3kwakAghrDN6P3ZhnYvQjdcO+uOEq4Cwlalng7LEPLeN0xtAT+L007POXRD8SNtlV55NkN5CNtwskwSDs4B33XgOgxBPJoEmAaE7Rn0zhb81F/ejV//xwfW/DkREGoAgFNnQDLKW8XnBQSa3IcZC6VktM3BOccTzwqqR6sYSjcsUYztzBCI1ruMoeI6IzOElsoy2n5B72zB6U6Auw+dwvwIO+vFCcdEXch+nREYxijQn+e8TCJVqczzGMIwAUH8e7s9U2VAkLjzwClc/wdfx/4TKyogtEuGMBCky283LRRIB6jjMFS80QOCqkPYhkHvbMGdB06B87UzSc5Fn6yqJwoBtmobzXhAllGeqawa4g0xFpI1vHcrUQYEiWeWRVuCk60AXTlxdHrrX40stHp4/+ce3paSykZgO6ed0irOo4Aw4m/QKrOM1o07njwJYO1Mkt5PWUZbJbEM6yHo50OT/DDPWSkZbXNQjvpqGG8oQ7j9yQV8/PaDQ2UenI2gVTcv6PlyJkGDzXWEZDRKlhHnPC1M26SgJ7yO7XXvNhrffHIBwNp9GPoNqzIgbFV30EEeAk3oeQxhTZLRNltIlQFBggJCN4jVxLG6AR5CTwaXUbIrzgboD/92Ywk0WB3GUPVGCwjdMFGDd7NM5Z/8i2/hA59/ZFOOvR1wYrmLJ060APRnCK//o9vwl7cfNF6LVUBw5f9v0klaSHK8AR15+yWszUMoJaNtDWprsBrGahJvb0BAIPlp9TkaEHR6vN2MZTofzx3dQ9BZ4mbJfk+eaOHQQntTjn2mcHxpFZ+//zgA4J7DiwCACyZrhfewG8Z46NgyHji6bLweWZLRVjEEfaLOW8WrHdVyvIbh0k7Nz2wXlAFBQjGEMEmzjEbsn6+DgstGsI3tCH2S3W5pmbHGEEYOCNozsFmruVYvUqnOzxV85u4jeNenvo0gSrAgM4su3NEonABPyvfY/YrijGS0WWdsQjeV87KM8jyEeIS00+3mLZYBQcL0EKgOYQMko5IhnDGotFNHSEajBAR9U53NkMQ452gH8XNOUuyEETgXRZ7LcmztaFQKn5ETKyIQ2B1NaXXubzMPgV6K8zyEIZ6z9W6ws1koA4IEVSd3NVN5IwrTKCBs1YDnnOOvvnkIRxY7W/J9+iS73XKqY91UHjHLSF8UhNHGD95elCBOOLrPsb5Z9FwsdyOsdCNUXAeNqlu4Ip6XAcHuVaRkP4fBc9i2yTIiycioV1iDZKQ22NlmrLoMCBKKIQSxGpwb4SFQoyuaWO49fGpTs3HmWz385o0P4W0fvUtR9UH45F1P4Uf++BsjfZ8+wLebHqoCwjoK03SGsBkZIXT87SgZPTnfGjn7iRZCK90QK90Q4zUPnsMKV8TzRQwhToO667DczWo2A4O6nUY5AYFeG2bhUbau2OZo5UlGG+IhpJLRE8+u4I1/egdulznZmwGa9A4tdPDv//bbQ33mwWNLePDY0kiD32QI4vOdIMJP/+Xd2C8zS84UMgxhnR7CZngkbRUQthdDOHSyjdd96Gu4Q6aLrhX03C+vCoYwXvPguU7hPSTJaDWMDWauMwTXYVvWVVdfuecyhBzJZy2b3pR1CGcQdx5YwNcen+/7nlzJKIzXnR+uJKMgxklJh0fd23cY0IDbPVXHXQdP5T7MB0+aGS3LqyE4H22VGuZkY9x/ZAn//OgJfHMTA98woEEnKpXdkSQjMyCcOwzh2JIo1By1mRwx45Qh+PAdVugBEEMATNmIJkxP9qPaqgl0EENQvYxys4yGr0PYbokY50RA+LOvPYkP3fJY3/fkmcqjTpI6aGCshvGWdM2kY8+OVdT36njw6BJe88Fb8cCRJfUadXUdxTPRc/tpQByYFwHnxPLW7XCVh0hbXY4qGbVltbrDNscjoeP3thlDoL0jRm33kXoIwlQer3lwHadw9awHBF02sj2EUXcoWyuGrVTWCzJHqUPYKpN8WJwTAaGfdgmIH7QVpNRd3+BivdX
KNGF2glgdix6Y2/efVMxko0ADcaLuy+81z5/S++ZbXfXasgoIa5+UjMI0+d9Pzgup6MRKN/czScJxz6FTA49962MnVMPBUWCnnY5SmEZBfKLuI9gMyYieu21mKtMiYdTFS+ohRMpD8F1WmKk1v9LFeE00sNMDArFO8hC2iiEM2kLTKFyzisyG8xBgfGa74JwICO6AgNAKIvUDrVopgMPWD3z0tgN4x198K/O6qkMIY7UaDGOOVi/C2z92F/77PUeGvYyhQAN4UgYE+/wDVYWdPrSUFjhKQNBXkHSP04CQzxC++eQC/tc/uwOPD5jsf/0fH8BHbzvY9z39QHNPmna69utrBSJDpu6762II7V6E137oVtxtBUK9tfZ2SkFcf0AQ93p5NZQegt93HM6v9PD88ycApIsWIOshbFV7FLMlRfYe5AWMtVQflx7CGcSglYW+164uGQHDM4SHjy3jXlmRqUNPO9V74nTDGAkHTm/wJjykSU7JfYTbVoM+Wr3oUtJ6JCOTIVgBoUAyOr0qVoB2RomNXpSsaxtTWrmtp7lduxehWXXhu866Bu+zy10cmG9ngqDuUWwnY5meiVFZkZ12qpvKti/HOcd8q4crzx8HYDMELcuIbSOGkGcmr6kwTX5PzntvfugZQ0LbSpwjAcHpu7LQZZtVWancqIjeKe1ejH/6zlE8s5QvfxB6UYJ2L8o87CrLSJOMgihRD81GZDLpyDCE0Dy+nvUEiMFIktEo1dQ2Q+iGMY4sCkOyiCF0tXvS99hxsq6W06TPOpqHsNYkgU4vRrPqwXPZyN1SgTQw20VLLS1gDxMQfuvGh/C3dx0e+TyGxdI6PQRaCC2thmj1IkzUfHgOA5BOhoTTnRBhzHHJbBNVzynwEBy4bn+mDwD7T6zgO0+fHumcdQxrKgPpwoNeG6rbKfkN1rHbvQj/+9/ci7d/7C7VTmcrMVRAYIzdwBh7jDG2nzH23oL3vIkx9jBj7CHG2Ce113+XMfag/OfNOZ/7I8bYpuYnuqw/NSOG0Ky46IWiDmG6IUzZ+ZUu/uOnvzNwEPYiseK3Teg8UzlKuKKVG1HroCOwAkIRQ+gGaX0E3ZuRJCO9DiFOcGihDc6By2abWGj3cldAw25AFMbJulpO01e70kNI+NopeqsXoVnx4DvO+iSjIP3tjdd1hjDEtX75kWfxjSc2P3uLMuHW6yEcOy0WB4IhsNxj0qY5O8ermGlWVDYeYNYheI4zMCB88ObH8d5/uH+kc9Yx0FTWPYTYlIqGyRwqqlSmsfHoMyt4+8e+hT+5db+6h1uBgQGBMeYC+DCAHwKwD8BbGWP7rPfsBfBrAF7FOX8BgF+Ur78ewIsBXAvg5QB+hTE2oX3uOgDTG3MpxXAHPEjEEHZO1JRkNCOzdJ4cMmOGBkDLWvHrnVNp8g+jRE2k7Q1mCPRQTtXF+duTru5pAOa+0aPsRhVECXw50KOE48kT4n69/LIZcJ6ftpga7f2vPYz5uiQCvQ6BeuFQb51h5Zl2ICUjjw3Uhv/01ifxL09lZUMgvVZ79bhWySiME/Wb3vTAcfz9vakHxTnHBz7/MO4/sv4V8kZ5CJS+qjMEeyySPDI3XsWOsQpOtXM8BJfBYYNbPXTCGKc7619Z5xWc6Uj6MIRhFg5FvYwomHzv3lk8u9TFf/3iY/jTW59c49mPjmEYwssA7OecH+CcBwA+DeAN1nveCeDDnPNFAOCcn5Cv7wPwdc55xDlvA7gfwA2ACjS/B+BX138Z/TEoy4gYwtx4VZnKO5oyIJzonzFDICnGnuD1XkZ62qmSjDZ4VzY6blGWke0h6LR0FPkqiBPUfSGvRUmi/INXXLYDQH4g7VpBKQ9xIkzW9aTo2oVpgAgIb/zTb+K//fP+oY7RJsnIGexB/MGXH8ff3f107t9IGrKDSmuNASGKuZLaPnHnYXz0tgPa5xP8+W0H8ZVHThR9PIPFdoCbH3om8/p6JSP63PHTYtyISmVHXYMOGltz41X
saFYLs4w8xxlY2RtGyYZILXktKYy/57THHmk/BOte0GffcO1u3Pnrr8Nls00sdrZmH2lguICwG4D+lB+Rr+m4AsAVjLHbGWN3MsZukK/fB+AGxliDMTYL4DUALpR/exeAGznnx0c//eHgDDCVl/WAEIr9EFRAGJAxQ6C0wQxDCLOSUZhw1RfHlnTWC3qgyFTOMgRTv1/SVlOjpp02KiJdMIo5npxvYfdUHRftaADID6R6Ku6g6+g3uO45dCpzv3Wkze3S9slBnODYUhdPD9nrqU2SkTuYIYRxovwTG52CGhSTIQyeSMI4QUf6Qu0gNhjeSo+M4OEn8T/72pP4uU/cm1k4qDqEdUpGNO7Ga75iknbqKTGEneNVzDYrBqvMVCoPOJ1AMqj11voMu4Wm/t/0mWGMeF4gGdF9o3s1VvOMpJfNxkaZyh6AvQBeDeCtAP6cMTbFOb8FwE0AvgngUwDuABAzxnYB+HEAfzzowIyxn2WM3cMYu2d+vn+1ceHJDShoUZKRZAhBlGBHw5SMBrn+AxlCkBjGYrBJDIFWZoVpp+QhKIZgZliN8n1kwEcJx0IrwHkTVeycqAHID6S9ITwEGtBFk3AniPDmj9yJT9xZ7O3oG+RUXEd9ZxAlxkTaDyLLyJNZRsWTTJxwcA4cLdB7lVxoXY/uIQ1TnBYlXN231SAyroPar6zFd7njwEJuAea6JSPreFSYBmQnwaXVEK7DMFb1sKNZya1UVr2MBjGEOK1/WA/ihKtnJu87cwMCeQhDdTuV52sdm86fvnu85vVd9Gw0hgkIR5Gu6gFgj3xNxxGI1X7IOT8I4HGIAAHO+Qc459dyzq8HwOTfXgTgcgD7GWOHADQYY7kcnnP+Ec75dZzz6+bm5tZwaSlch/XV9Va6ETyHYapeUQ/gtGQI9GOcbPX6yk69Aoag5JEgMoxF+uE3+semCUeZykGxhAVYHsJIaaccdS0g9KIYNd/F3FgVQL5klAbJ4u8LlUGX/7strYaIE45DJ4s3lkkrlR3FEE53qH3I4IAQJxxLqyHGqi481+m78qPzPLq4misxFFWpt3uRGvzDFKdFMVcJAe1ejE4QZ569YVf1y90QDx4VFeu6NMQ5TwPCiB1e7XPoZyqHMYfvMjDGsGOsgtUwVgsZI8vIYRi0+FbpriOmcx9Z7KAbxiIg9NnHWb+E0SQj8V5761kVEOR3j1f9DS9e7YdhAsLdAPYyxi5ljFUAvAXAjdZ7PgvBDiCloSsAHGCMuYyxGfn6NQCuAXAL5/zznPPzOeeXcM4vAdDhnF++IVeUA9dhmVQ3AHhqoYM/+soTWF4VlZS00gWARsVV2jggIrq9eYeOgaZyGBsDNvUQ+k8Cf3/vEfyXLwy/vSIdt+o5qHpOYWEafS8NfMZGL0xTDCFO0IsSVD0xAU83/FzJaJgso1Qyyp8BaAVYJNEAWqWyk26wQobjMAzh7kOn0A5iXHfJDvgDFhV0vkGcqKwZHWoxkBMQKIFhkGTEOReSSEFiQKsg6BTh7oOn1LjQA4KeeTaKZBTFoqU3VR4DQjIqMpVFYoL4fWbkQuyUDNxrZQh0vqP4CEnC8UP/7234xJ2HkfA0IOQ11MurVFYb5AyRCKHfA50l0G/nawxhW0lGnPMIQu+/GcAjAD7DOX+IMfZ+xtiPyrfdDGCBMfYwgK8CeA/nfAGAD+A2+fpHALxNHm9LIfqoZx+kD391P37/S4/jq4+ewHjNR00LCDXfVRMdPcj9Mo1o4tc9gShOECUcNV+kPNJkFGp1CIOyjG556Bl87jvHhrlMcWx6oDwHjYqbwxDMPZ5pJTXTrI5WhxAnqGseQhAlav/bufFqrtQ2zJ7VSjIqmABo1VQk0QDpoNVNZTLoloZoMHjjfcdQ91287vk7hWTUlyGkf8vbi6IjnwubZbR6kfKrBpnKNIl0FEMQ10A+EElGw7bouPNA2sk0iLOFiuL1tQcE+n5iiQBUYRqQDfJRkgYEepZShiCORR7CIB8
nZQij9eVa6UZYaAeI4lQyGtZDWMsGObqCrV9TIBlZGhD87RUQAIBzfhPn/ArO+fM45x+Qr72Pc36j/G/OOX8353wf5/xqzvmn5etd+do+zvkrOOffKTj+2EZdUB6cnCyjbhjjpgeFn31sSfRR0RlBzRcbegDAVReICsq8lR9B7cOsTfA0mCgFdFFb9dAP3w76d1Q9vRr2HeC/fdMj+JNbU7WNvrPiOmhUvKypbE3GS6shxqsexmvemmoiDi+0ld/S8HXJKEHVF4/VzvFarocwHEPon9NNg6RIogFMQ7LiinNclJOn6PDaXwL6wgPHcf2+89CoCLmj38pbX/nnsRZVg5JhCDFm5MQ5iCGoVXuUIIgS9VuezjCE4WSeOw4sgIm1jvHdekAYpQ6EzmtWXlfFdVDz3UKGEEZcmag1GbjpGTHrEAY3twvXwRCIcQVRgphzxUr6baGp//coze0AM+CkkpG4H+QhbFVbk3OiUjkv7fTWx+ax0o1w+U4RizIBwXPRlKuVa/ZMAQDmh2AIumRExhpl/NAzoEtGsZxEi7DUCfuuHL/w4HHc+mhqtofaCqNRcTOrcNtDWO6GmKj7qPtuX01fR5xwvP6PvoGP337QMpUT9MJYyTM7CxjCMGmng7KMKCAUSTR0nkDa3A5IPYQgTvre99v3n8RiJ8SPvHAXAHE/+22hGQwKCEGxhzA7JEPQP6unZhLzVJLkEF5ELxKb2r9g10Tm/JfXyRBolT47Lq5roi7GEQWEjIeQJColtSbHIDFZvQ5hmOZ2FAxH8RDoeQyl5NWvod6GBgTt/XTvdMkI2HivsQjnREBwmPAQ9BXhP33nKGbHqvjAj30XAEHNan56O3TJ6JrdkwCKaxFIGgJMhkATDhm85vtNzVbHB29+DB/+qlj1n14NCitYOec4sdxTeisgJmWHiRVVo5pd9Qd2QFgVAaFRcY3z+IVP/Uthy/BT7QCtXoSnT3UQxklqKsciuNHkOzNWyfVdhpGM6DyLJAKdRhdtF5pXh6DndPfzEe49vAjGgO+7YhYABqadRoZklA0InZwsI7GfsuYhDJjI9e/QG8BlPYTBq8mVrmjouGdKpAfrHgIdr+o5I2UZ0WRODGG8Jp5/X2XtWJJRnDKEqmIIZtpq6iEMKRmNwhACMyB4sn/SwErlTEAQ/Zo+d9+xwl5c+iHjHIZA92pC3rutMpbPiYBgU9UoTvCVR0/gh68+Hy+7dAeu3j2Jy3eOGQyh6jsqv/7imSYmal5h6qm+itI1e3oYqA0GIYy5kb1h+whffewEvvzIswDE6q+oQGulF6EXJVjUVotBnOqxjZxVv75hDyC01sm6h7oVEL791CK+XFDgRPdhfqWHKOEmQ9A8hMm6j26YbVCXbkDUL8toEENIB0iRsRzrHoJLASH9XL+AQOY4XYtozNZHMkp0hpDdUUFtAAAgAElEQVQNUHmG76pscDjdHM5U1j+7YDAE8d/kIQxTTEbPHH13XkCYG6+OlGVkS0a0ynUdqmjPplrSM0sMgZ4RPctoUIEpkI7FYdOKddB39qIECedwaB/nPFM5p5JZzzLaf6KFn//UvxR2M9YXp7oJHVgBYUzeu63yEc6JgOBqrRUA4PhSF0GU4Lt2TYIxhs/+h1fh/7zhqkJT+YLJGnZO5OvhgJlzrTcro4FBkhEhjJPCIAKIB3OhFci9Gcy6AR1kcp9eDdNmWVFqhjWrbraXUWTKNUurISYlQ9BX7KtBjP0nVnInQZJonlkWjEmZyjLtlFZ5VC1tG3zDFab1p9+tXqT078KAoLeukOxPD562rPCntz6J//jpfwEgJki6jwDgO6zvyps8oYrr4GgeQ+ilq0/9GgBgvOqh6jkD6xD0iePkSjFDGEbmoQlmpk9AmB2rorcOyWhu3AwIadqp5SHEXBnOaUDIMgTHWq3fe/gUfvAPvq6eW875ukzlbpiyuCiWDMHNz2zSGYLdlyiME1Xfoxv3OvS4pktGdG9oDJWS0SbAZdR
lUdzsp06JFdyeHXXxd7lyyXgIVfFjnD9Zw87xanH3Tm0F3NJWrhQoJnMCgj4x2JN2NxT9dvSeLHmrR1qpxwlXAzyME/hemrFh6/S2ZLS0GmKi5gsDWluxt4MIYcwz220C6WR0XHaApcApurhygyHQd5jXR7UZ60s7Ha96mB2rDA4IbDiGcNsT87jnkOhFFMSp9AVAZhkNZggX7qjj6OnVjGGd19yOgkSz6qHmuwM9hMhgCOmzeNrKMspjCCvd0JBRaIKhDCfbQ2BM/G00U9mSjKriOfAKCtPCOEGFTGXfNJX1LCPP6nb66DMreOzZFSWf6fd2PaZySAyBSYYwwEMgBhFpkhGd/10HT+UmLwwylfUsI6CUjDYUKVUVN/5pGRAunG4Y77OzjObGq9g1WUPNd2VAyPcQdIagT+7dHMmIMSkZGQEhmxraDmI8u5x+Xy5D0M6HfARBv8X1Nnw3U2yWMo4EScKx3E0ZAk1Qoo21eN8jx5cz30sMgYxNum+04qfVuGII1sM8DEOgCaoo7XRZ7tO7e6pe6CHoeyrTimupj4dwfKmrvje0GEK/DeKBdCBfOttEL0qMjp1A+hvrkzVNyiIgOENIRrqHoElG8jpW+tQh/Orf3493/12a5EfBg/wLXdajRcLIHoK8jvGah4rn5DAE85hRkmQYgt36Io8h0H/Tuev3diRTOaDUYNNUHpRlRH9PNIZAz/b8Sg8HchZVSU5Aoc8CaeuK8VIy2nhQQKACk6cXO3Adhgsma8b76pZk9POvvRyf+blXAkhz6vOifS9nkAPpwJjWGMJEzUcUJ8bgtidtmhgOnEy7gudlxOTtQxvEacZGo5pO8gRjQgoidII4YyrrrOKxZ7K7mtleSsUT+i5du5KMakUMYYg6hChlCHn3vCU3Xdkz3ciVaIB0oHmGqZyeiz5pcM5x7PSquj+BxrQA9N3+kc4TAObGxTO1tGoFBHmt+mqQgsQYMYQBprI+kdKquOo56v7mBR3CsaWuwXCJsewokIwm6z4qI5vKiTq36/edh1dcNgMg6+Wp6zLSTi0PQfsN7dU6/b70POnnujzCBKpnGUUyIHgF+0DHCTe6/Ip/p+ehj6E82ciQjLTnSmUZWZLRKNczCs6JgKAeRE4MYRW7p+pqVUKo+WZAGK/52CNZxPmTdXTDxJhQCLRCGat6uabyZD1lCFMNsTevPgBtyYg+R62kgXyGoE/MpI2HcVph2ai46IRmnYMeWE5IBjJZ95W8lCTc6Ho6VEBwHXguU5+zJSN7taZWdHFSKMPoATOPsq+ogFDHkdP5tQg6Q6B7shrGGJNSoF6cttgR9R66mW14CK4DzovbL9PnSJPXjx3Gifq9DWYon5VGxUXNG0YySr+bFgC7puqK9fTzENq9yGCytOIsCggTddG/aZRup2lAcPHhn3gx3viSPQBSySjjIWiFacQuKTiaWUZO7sqcrrcfQ/jO06fx9o/d1XcRYpvKrsPgOMV7KtM507OXbqWZMrCK6+DOA9n9wxPOlQdmFKZZvYzKLKNNgN1U66lTHVwo/QMdRpaRZ96aK84T9QqPPpOVUGgA7GhWctNOdVN5qk4MwZwYHjy6hPmVnswoEudpMoQ8yaingp2SjLT9CRoVL1PnEESJehCPytbEUw1f+QDdKFZMwXcZHh0iIPieA89xlKFO964wIITpZFu0B4O+aspboa30QoxVPVw800QQJarvvo48DwEAJmoexqqewVxoExLFECLTQyiSO+xzpAl2xWgrnl6jPhnS/RobVjLS7gkxhF1TtbQwrU9zu04vMgIFBY+ZptD5ezkMwR/Qv6kI9KxWrDFE9zDPQ1ABIZN2msB1RJ8j15qcKVhQoKPrcx2WkSnvOXQKtz1xMrOntQ497TSKOVxGLbfzGYJqbWFlGQEpK37ppdO5e2QkPPUJ9Gc9tCqVq54D32WlZLSRoLmAfrgji52MfwCICZDkJZ0tAMBVcgPwR49nJ0h6IGfGKqZklOMhTDY
qylSWX4VWL8LbPnYX/uTW/cYq0WQI+ZLRZXNNAGnqoT64aJLXV0W9KFYTNWnvs2NV9d6OttXnvl2TOHp6NTO4TrZ6OH8ilduIIVAwTD0EWomnn48T0Y+HgmTRik1f7eWteFty4/a9MlA/cSK76Z6eduq5jrrf9YqLiVp+QIgSjiThCOJ0BQgAvlrd5k/adI5pQDANekKed9SseqhapnKeTKYHRuoIesFkXZnKK/0YQhAbWUxtmaVFz0JgyS2TdR+VAdXZRdAlIx2ek592Shk9AMAYMzKuSLoB5EZXOfn/NkOYaVYyWUb0nH3zyfysH0BsnwuI34gYQlHtQ2xM6CZDAFLfbNdkPdcr45yjSp+3PAT6XrofY1WvZAgbCZ0hdIIIJ1sBLtyRDQiMMdRkRKYfhDA3XsXsWAWPPrOMThDhE3ceVoOFJv6ZZhXdMJVBdHONVuVTdV+ayhzNqgeHibTJ050QS6tmVfLBhf6S0YmVLi6ZaaLiOjjVln2SknQio0prfUIKogRTKiCISXBmrKLY0WqQMoQXXSgqtPdbk+18q6faeQCizN5zHOWFkGRU9VzUfMfQP2nQUpAsMpYNySivBkNKRpfPiYCw/9mcgKC1PRDnSem4HibqvhHojmt7ZgdxgiCKrSwjOZnF+TUhWYYQodWL8Jv/9KBKDrC77tJvWvdd6SHIHPpOiKt/6xbcvt/cKtPIMlIBoYblrkg7Lsoy4pyjbTGElW6EMZnuan9G9xBGkYzoM1XfnF7oubQlI9uv0TOuYi1Y2HUIylTW0kUBYGZM7GtiNOyT77njyeLtR43WFcpDyO+DFicpyyXpyshykpla4zU/9x7qzfPsLCN61gjjNV/9tpuNcyQgiH9HCcfTp8QkmBcQALF6JGPLxlXnT+DRZ1bwqW89jd/47IO4UTad62krEyD1BOj1mi86p1ZcB82qq+oQqp6DZsVTmTyrQWxUJesPUh5DOLHSw86JKqYafuohaNkx9VyGkKhVIZmxM82qSrFtS6MZEOm2AIyHsReJLQqJMQFAxXVzTWVArECXjPRZcex0A5/8B12fdIs8hLGah+lmBbNjVTxxIsvcFEOQ0biiMafJum8yBE1y6sn0WTvLCBBMZN/7vpiRDkOLISx3Q9x96BT+6o7D+KLckYwWA/r3AOL5qGmr4hMrXbR6ER4+Zn6HPqHTTnVTjQo4F5lGakKzAlYgTVKzXkak7ToOg++yjGQ0ISWjdTEE1xxHKrkjhyH42gKs5jtGlpGnGIIZUG2GQOc6O5Yn24nn7IGjS4VFa3pgiRNRmFa0KU+cpM85PZ+JxmaWu5EY817+TnsJR25A0AtLCVvZ8fQcCQgpQ0hTTrMeAiAGZ9XPDwjPv2Acjz2zgpseEE3xPn77QXDOFUPYIR/EVkBdJ1NNve67aFRdeI4jNUrxwzeqrjJuO3L7Th20WrBfp4l5bqyGHc2KmXbqkYdgpoPGidijeFKuzkkymm74Knh0glgNnrwunLQyvXimoViF74occZV2qgWEiZo58ep+S951EfSJyF5hdcMYQZwow23vzrFcyShJhHHnKIYgzrdZ8TBZ9w1v49hpjSHI5nF5DGH/iRbCmONxi5HQ+U7WfbgOw0o3VEGa9hyYbPjG5EATdMVzjFUxTewnrR5NtpfSrLqK7VFwb1TczAqcPAy9yIw2/gFEoKR73JUra/IQSEJbC9RzbzGE4sI0cxI0GEKSFq3ZbexThkBMnQKC8EV0ZkrPZsKBbx3M9xHs5naU2ZRXmJYkWVM5SrgaE0uroVwECtnNlgATzYOwexlVyoCwudDT3WjrxIuKGILvGj2NdFx1/gR6UYJ7Dy/istkmHjq2jG8dPKVW7ylDED8evV71HNQrrtyK0ZGyg3igmtW0I+lqTkDYKdMYe1GCrz56Au/863vAOVd56DsnqphuVLQsI91DMCUjeze1I4urmG748FxHdSzVJSO6Hp21qA3Rx6oqhz2TdqoF1ElLmkkZQn/JKOj
DEFSFr0zJu3znGPY/28oMuijhih0AaaBqVLMB4bjWRps8Hp260z2lXkgL1mRNk5zvOmoAU0bag0eX1b3QB38vipU8qZvK9G+7aZ8tXdQrrmJaaXCvqP2o7fsVROnE1OpFqi2CLg1R8CbJCEDfdNs8qEBnTWzFhWncyPgTGVcpQ3B1hpBknwsKBHaml/77dsIYu6fqqPkO3vsP9+OGP/x6pq7IrEMQPdAKm9vlSD5xwpX3uLwaol5xC7PT9CylMLYlIzsg+BuyT/QwOCcCgsPSgHB0cRV131UrVBv1ipsxlAm6bv7BN70QUw0ff3PnYUU1d1i7rPWiWFZYCobQrLrwPSY0ajnhkM4PiBUKDQQaBCTbdMMYdx5YwJcefha9KDH2odUZgm6G2qYyDXpaVZ5Y6anWyxQ8OkGsZBy1itcmbVq1zo1X1Wd914HnOrkMwZZmUoaQv+czQe+hY8sWtFqi9NG9541hpRfhWasbbSz70RDovJoVFxPWeR1f6qqFQ8oQ0ueAJiwKvJnVO1XUukwFhNNq7wXxPXmSEfkteh0CrVTtbC57Zd3wPS0giIBGv5l+z/R7TIGWPASgT0BwTX/h4Mk2PnrbAQxCIFN2HcuH61eYpgffqu+oexEniSEZ5aadWoVps+PEENLfdzWIMdXw8cvXX4nL5pp49JkVI2kDsLudJto+zjlpp1odQsoQErWYXO4KhkDeiP3bJTwNmHHCtQwnrhg+oWQIGwydIayGMRoVF4yx3PfW+jCEy3eOwXUYLp5p4EUXTuHFF03j4Ml26iHICZIYQi9M1CRUr4hWGL6UjER6qGPs0tYJIhVcKIvnvAnZKz+KVRbJcjdUNQRz41VMN/108x0tf74p93Ogoqi0LiJNg6UJJJWMIo0hpN9NUAxBbogOiIlWN/wMyciaeIkhkKlclGWkTxrZgCCOR2X91MLc9hESiyFUtN9isu6jLTdjjxOOZ5a7ylfSAzaBNG5a9ZN0RituxRAcBxM1wT70zqqAYEX6tXS1VuG6TNJVkpH5eQo69Hw2qq5iWlQNS7+n7gmYtTHi9VYvUgxL17npt5qo+er66do+d98x/OfPPzJwU6demGRSToF+hWmWZKTVZOgMwTaVSUIrZgi6ZBShUXHxzu+7DP/X6/cByMqVqpdRNET764EMQXwfXVcQJ1iRvhIgGYL8/KGFNq75v2/Gt59azPUQJmpbt43mOREQ9C6LUczVSiUPL75oGi++aDr3b1XPxY+/ZA/e+b2XgTGGKTnZZU3ldOcqkk+u3j2JF+6Zgu+K3dO6chCQjguIyZGOtXtKeBwkGXXDRB13pRupwqSZMSEZne4EqisqXV+6+5S5k5ZeF0EGnM4mKIBMyVW8PnAoIMyMVZRk5Mu0U3WfbMmoT0AozjIqrkMgk5smtL07BXN7wtL1dUMS0LKMpIcACGp/YqWLOOG4eCZtBR1EiRHYbMnoZCvAcjfEte//Ev750WfTlgNeyhAW2+Ygnqz7xnX1tO+oeUIy4pxrAcFiCJI10bk3Ki4umWlirOrhG/vFnhh5DMHYtEk+A+1epNip7iFQAsBk3ddWt2Y23aBGa3qDQx0kGdkTbJhw6/lJ5bNY+w3tNvYpQzDTTosYAo0HlVFnBQTdlCdT2bOK4QhRzBWDjLUso7omGdWkhwCIe/iZe47grR+5E70oBudQaacH5tsIY+Fv2i1TAMGEW72o74ZOGwVv8FvOflBASDg3NuPIw3t/6Kq+x/qdN16j/ntCBQQhDZHJ2eqlK3IaGB/4X64GALXPwWoQCcmoaktG4rO7p+vAITFxVmQGCk2EK91IPeyTdR/TjQoSLh5CfbXVtExlCggTGkMgFpCmqApTuVFxla+gZzgdW+piuuGj6rmKEVU8Rxn3gG0qe1jpRUjkAKNzmG72zzIK+jCEZUsymh2rYKLm4cl5MyDQdxJUllHVNRrvUWHXxZIhUMWyvlKjCUt5CO0eDp1sY2k1xKGTHeUNeI6D8ZqPp09
14LkMjKUbI03UfSRST3blvaDgSZ12e1GinoFT7UC9F0i1/Imaj2eXe2hUPLgOw0sunsbXHhcBgQJtUSW8Ygjd1EOoeq6a7PtJRuQNrHQjnJcmmWVgB1N1D4s2yLGqwmu+q9iRzRAAMppTP4GuiZ6ZWflMGx5CEGPXlLjHeoq1Dl22ocVEEUNIeJqFFseiLxjn6bGDWGwcRYuQIBIMIZKFp3raKQX+bigYq82uxmseEi6uQZ8vNgPnBENQBTHU1rYPQ1gLJutiv9NOICZ+kmio42kvZ2CoCt0gFgxBTgTnT9RElpEcmLumBDOYavhy9ZhKRivdEMurERwmJn1aFZ7qBIaHULcCAg3scVn/AKTNzdIU1QgdKat5rpCC9JXUgfkWLpO5/7Rnbs13jbRBWzLiPNX9bVO5SDIyi3XMAUn0mQIwYwznT9YyK2raBpGgMwS67oV2oLq37paZZ9Rqwm5dAaSdRRdagcrs6UWJquj1NQ9hsRPiCslear6jpB614tYlI7na7IWJtp8wN2QnuifEjIjVvezSHeo95M30Ywicc7QCkXZK94Um1TxTOYjNiXcwQ8iXjByHwWHZ3cY4h7FIq/mukk5FHYKjPg9kC8HswrSphp+pVu4EsXrGaxVxvCxDMNO8qdtpXpaVqFSmljgpS9Azq+qaZKS3L4mT/ICwGsQFaafUvmLzfYRzIiDQgxRzblDQ9YJWmfMrPVR9YVTWfAf75UpVeAimQU26bCcQhU9k5l55/jh6UdolcbfcyWqq4Ut92ZSMaOtLxpja5GSxHSBK0lbCFdeB6zC1CqeBU/UdtZIhmasiC/JShpDSa10yOnCyjctmRXX0G1+8B//tJ16EHc2KEWT1yUBJM1qQBERQ8l1W2LrCrENIcHihjcOyUG/FkozEdVSVrk+IE+QGhEbFVczo5EpPrUYvmBQBgdp4+zmSEUl1C60ejp6mgBAjioUJyZhgisvdEKc7AV6wewIVz8GY9I/E9aTat+4hAMKv0bO69CBH92RCSUbi+l9uBISqugZCx+qv1QmEZNHMMZXpd6I6BP17A41d9INgxvmJGXbXWF1qI9ACCChmCPq/7cK0iudgouYZHgJ5h0C6is+kcmv/3w1juA6GYgiJltWlt7+p+15uQEgSbpjKKiCEidHoj5B2PN18H+GcCAj6g5SX1jUqJrVsHdFzxMHrnn8evvDAMwhjsVNYNhdb/H+7F0kPQTxAlMFEK9B9uybgOQyXzjZVBgqtzFrdSFWT6uexZElGjDFMN3ycXBGTGD3wVc9VqyWSfQChPy+0eugE6eCp+mkK4HI3xPxKD8+TJu5kw8e/vkbsOayv8Iy+QdaeCDQIqVivsHWFJRn9xmcfxK//jwfE9VOXUD0gjFWMXcQAkaFimMrKbPfUfr8n24FKISVWJlb8dvtrKjiSnUWDWElUtsQ0ITdGP9UOMNOs4JKZBhoVLzVpSYLRJk59HwD9ntBvB6SBhJgR/UZX75lUwY4665obMGlZRlGSuX9Vy1Qer3rGLnOBdr4A0Or1n5iEFJY/xuy8fhUQdMnRd1RQjJPUE3O1hZ1+P1KGQD24nEwygzCVxfXWiiSjTEBwZJZRNu021uoQIlnfox8bAOoVJzWVI661dBe1HbTgoIXMapjPEF5z1U7c9quvwSVyIbaZOCc8BJ1qRsnGSUZkzs7LgAAAP/rCXfj8/cdx+/6TfSWj1UDkoP/w1ReAMYY5KWHQCvTK88Zx/2/9ABoV0fisFyZqZbbcDcVeyHJiINlJZM2YK9uLZ5o4fEqsrFUnRVkIBaQMAUhbfIdxui2myI8XA+XAvDjO86RkpIPuadVzjAwue5Mcvc+N2LazqFLZlIyWVkM1Ma10Q9R8xxg4s2PVrGRUwBDqFRc7GhUwJhjC6U6AiZqnVsx0TkZhmpywdF/v/iOi4KwXit43dA/Ga0Im60UJphoVvOTiaRxZXFWLgVDTvskH0XcK01euBkOw6kjod696Ll504RTuP7K
kvAjTQzCzjOy0XcNUllXKADKmsp6y2g9FHgIgAoL92wIwVsVFWUZ2G/tEMYTEOJZgCGnufpKI/T3SQkrBhvNMZd9lykPoxxAirbAs4Vydk84QGhVPyUoGQ+DcYBh0fOUh5JjKY5vsHRCGWiozxm5gjD3GGNvPGHtvwXvexBh7mDH2EGPsk9rrv8sYe1D+82bt9b+Vx3yQMfZxxpifd9yNADGEhLJw+pjKawENzGeXu2ql9+or5zBe83DjfcfQDeNMTQNNGp1QeAjPv2AC777+CpUBQZpxVZOTbIYgJKNINY9TbSdkvxp9orx4poHDC6JoqacXyvlZhjA3VsV8q4d2kFax6pLRk7IamBrqGdflpAEh7x4tWwyh6rtil7bCOoS0K2sYi0mSjiFy6M3HZaZZwUo3MrrCJn08BM91MN2o4GRLSEaz41U1EOk+Gx6Clz0OVZj3InMg61LWjmYF73/Dd+Hj73ip+jtNXLqkqDOEbhir+2kEhMT0EOpaDcu/fcXF+LEX7c58B2CayjpDGM8pTFvWmCdN0rQ9KD0/o3oIgGDIccJxz6FT+Nx9x1IzPqdSmVsSr2d5CHZhWrofMcNE3VPPC038eop3zXezASGIjZTsfvs4m5XGXBnc9p4qxR5CthvsqkyD3igFYxQM/GbGmAvgwwB+CMA+AG9ljO2z3rMXwK8BeBXn/AUAflG+/noALwZwLYCXA/gVxhjlJ/wtgKsAXA2gDuBnNuKC8qDvmKZ3Vlwv6OHpBLEa0FXPxQ0vOB+3PPQsTraCzARJP3acmL1yaIJebAeoeE6moKrVjdSDv9KNTIYgJ2+SmyraauuSmSaOL3WVYQWIB1FJRjkMYVWTjPT8+CfnW/AcllvlTUHWbvthS0YGQ+gjGYVxojKfoljUj9DKVMhl5oqJUg1PtU2JxbXuI5BODLNjFSy0Apxs9TDbrKoBSkxMX7XqiwjKRtInJD1ZQc/imm74ckWapuYakpFvegidIMZqGGNmrIKK6xjFaeRT0PmT3AgIZvpf/s3VRlYLwfYQVJfVSioZ9QyGkL4OZBnCujwEWW38F7cfwu/d/Fim/7+4FyI1O12pp91OAW2HMm4FhCg9lmAIkbz+bECwvbEkEW3iaUwBVKlcsEGOVmkcc55rKtt1CNQ6RJnKllKxGsbGfiZnAsN888sA7OecH+CcBwA+DeAN1nveCeDDnPNFAOCcn5Cv7wPwdc55xDlvA7gfwA3yPTdxCQDfArBn/ZeTD1djCJGmSa4X+mpCHwA/9apL0Q4iPHWqk2MqZ41KIH1YT3VC1KwHoua7hj5O++PakhFVxtoMARB7QKS9lUQ1tusw4xrmxqs42Qqw0tX1VketpA7Mt3HRTCN3BeO6/RkCpXamPoYoyrNXaYQgTlTQEgwhwUovQpxwwz8hUGDTjeUk4dBjP/0WTZWuWpUMoYfZ8YqalGjC1CuV9eBga7m2h6AzhCmt9bnd/16XFGnf4XYvUvLG3HjVaF9BcicFj7qfnXRtIxgwV/SBLhkVtK5IGUJ+2ukghtBPMvKlqdwOInTDON3VTpeM1DaasWQI4lh6k0pAL0xLK4x9Vxj7ev0LLTp0RlWvmIsRyu4b154r1xEBLG8LzURrbREnSYGpbKad6gyBc8GK9PpY6tC6rRkCgN0Antb+/4h8TccVAK5gjN3OGLuTMXaDfP0+ADcwxhqMsVkArwFwof5BKRW9HcAXR7mAYeBZHsJG3XB9JaivDPbtmsBbXiouM8sQtOpXTYaoa5O6LTNVPVelRgLEECJMSg/Dc0VKIzEEnX5fKievQwvtdAUlV+c7mhWDicyNVREnHM8udy2GID735Hwr1z8A0kpe+3qbFRcXzzTwD/ceERO7nCwYY5iopxXWNsI41Xz1TctbvSg/IEjpS5dY4qRIMkrlMpKMZnSGICUWs7ldliEQemFstB0fr+kMIRsQSILpapIRTc6tXoRVKTXOjlWMamUKOhSsG5WsrkxBrWcwhJTx9XT
...(base64-encoded PNG data for the loss-curve plot omitted)...\n",
      "text/plain": [
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import matplotlib.pyplot as plt\n", + "import matplotlib.ticker as ticker\n", + "\n", + "plt.figure()\n", + "plt.plot(all_losses)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": 234, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[]" + ] + }, + "execution_count": 234, + "metadata": {}, + "output_type": "execute_result" + }, + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYQAAAD8CAYAAAB3u9PLAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvhp/UCwAAIABJREFUeJzt3Xl4nOV18P/vmdG+jxZrtSzJlvCCjXeDHXZIDSSQZgNaEsqbQGmbtgm/pIUutKElb9K+v6ZJS9oQQuBtWZLQhEAgLCEEGzDe8G6DF3mRbGu1pNE2kmbmfv+YZ8YjaUYaSbNI1vlcly5Lj56ZeZ5EzJn7Pvc5txhjUEoppWyJvgCllFLTgwYEpZRSgAYEpZRSFg0ISimlAA0ISimlLBoQlFJKARoQlFJKWTQgKKWUAjQgKKWUsiQl+gImorCw0FRVVSX6MpRSakbZuXNnmzGmaLzzZlRAqKqqYseOHYm+DKWUmlFE5GQk5+mUkVJKKUADglJKKYsGBKWUUoAGBKWUUpaIAoKIbBSRD0XkqIjcH+acz4rIQRE5ICJPBx3/lojst75uDTr+hIgcF5Hd1tfyqd+OUkqpyRp3lZGI2IFHgOuBRmC7iLxgjDkYdE4t8ACwwRjTISJzrOM3ASuB5UAq8FsR+ZUxxmk99GvGmOeiekdKKaUmJZIRwlrgqDGm3hgzCDwL3DLinLuBR4wxHQDGmBbr+GJgkzHGbYzpBfYCG6Nz6UoppaIpkoBQDjQE/dxoHQtWB9SJyDsi8p6I+N/09wAbRSRDRAqBq4G5QY97WET2isi3RSR1kvcwaU1dLl7edzbeL6uUUtNStJLKSUAtcBVwO/ADEckzxrwGvAy8CzwDbAE81mMeABYCa4B84C9DPbGI3CMiO0RkR2tra5Qu1+fbrx/mj596n94Bd1SfVymlZqJIAsJphn+qr7COBWsEXjDGDBljjgOH8QUIjDEPG2OWG2OuB8T6HcaYs8ZnAPgRvqmpUYwxjxpjVhtjVhcVjVt5HTGv1/DGB76ZrTOd/VF7XqWUmqkiCQjbgVoRqRaRFOA24IUR5zyPb3SANTVUB9SLiF1ECqzjy4BlwGvWz6XWvwJ8Atg/5buZgL2nu2jrGQCgsUMDglJKjbvKyBjjFpEvAa8CduBxY8wBEXkI2GGMecH63UdF5CC+KaGvGWPaRSQN2Ox7z8cJ3GGM8c/PPCUiRfhGDbuBe6N9c2N541Bz4PtGHSEopVRkze2MMS/jywUEH3sw6HsD3Gd9BZ/jwrfSKNRzXjPRi42mXx9qYfU8B3saO2ns6EvkpSil1LQwKyuVz3T2c+isk+sXF1OWl85pnTJSSqnZGRD8yeRrF82hwpGuOQSllGK2BoRDzcwryGB+URbleemc1hyCU
krNvoDQN+jm3WPtXLuwGBGhwpFBa/cAriHP+A9WSqkL2KwLCJuPtDHo9nLdojkAVDjSAa1FUEqpWRcQ3jjUTHZaEmuq8wEoz/MFBM0jKKVmu1kVELxew28+aOXKuiKS7b5br8jPANA8glJq1ptVAcFfnXytNV0EUJydit0mWouglJr1ZlVAeONQMzaBq+rOB4Qku43S3DStRVBKzXqzKiD4qpPzcWSmDDtenqe1CEopNWsCwmmrOjl4usivwpGhOQSl1Kw3awLCb6xmdtcuKh71u3JHOk1OF4Nub7wvSymlpo1ZExB+faiFqoIM5hdljvpdhSMdY+Bsl44SlFKz16wICL0DbrYca+caqzp5pAqrFkETy0qp2WxWBIS3j7Yx6DlfnTxShcNXi6CJZaXUbDYrAsLI6uSRSnLTEJkeG+UMuD18940j9A9qbyWlVHzNioAwryCT29dWBqqTR0pJslGSkzYtitN2nOjgX14/zFuHWxN9KUqpWSaiHdNmuj+5esG455RPk41yWrt9+zw3aYJbKRVns2KEEInpslGOPyCcdboSfCVKqdlGA4KlwpFBk9OF25PYWoS2Hv8IQQOCUiq+NCBYyh3peLyGpgR/Mg+MEDQgKKXiTAOCxb9RTqLzCK06QlBKJYgGBMt02SjnfFLZhTEmodeilJpdIgoIIrJRRD4UkaMicn+Ycz4rIgdF5ICIPB10/Fsist/6ujXE474rIj2Tv4XoKPNXKye4FqGtZwCbwKDHy7newYRei1Jqdhk3IIiIHXgEuAFYDNwuIotHnFMLPABsMMYsAb5sHb8JWAksB9YBXxWRnKDHrQYc0bmVqUlLtlOUnZrQWgSP13Cud5DaOdmA5hGUUvEVyQhhLXDUGFNvjBkEngVuGXHO3cAjxpgOAGNMi3V8MbDJGOM2xvQCe4GNEAg0/wz8xdRvIzoqHOkJHSG09w7gNbC0IhfQPIJSKr4iCQjlQEPQz43WsWB1QJ2IvCMi74nIRuv4HmCjiGSISCFwNTDX+t2XgBeMMWcnf/nRleiNctq6fVNES8t9AUFrEZRS8RStSuUkoBa4CqgANonIUmPMayKyBngXaAW2AB4RKQM+Y50/JhG5B7gHoLKyMkqXG1qFI4NXDzTh9RpsttFdUWPNv8JoYUk2STbRamWlVFxFMkI4zflP9eB7wz894pxGfJ/2h4wxx4HD+AIExpiHjTHLjTHXA2L9bgWwADgqIieADBE5GurFjTGPGmNWG2NWFxUVTeDWJq7ckc6Qx9BirfSJN/8Ko+KcNIpz0jSHoJSKq0gCwnagVkSqRSQFuA14YcQ5z2N92remhuqAehGxi0iBdXwZsAx4zRjzkjGmxBhTZYypAvqMMeM3HIqxQC1CZ2ISy/4q5aLsVEpy0zSHoJSKq3EDgjHGjW++/1XgEPATY8wBEXlIRG62TnsVaBeRg8CbwNeMMe1AMrDZOv4ocIf1fNNSRYJrEVq7B0hPtpOZmqQBQSkVdxHlEIwxLwMvjzj2YND3BrjP+go+x4VvpdF4z58VyXXEWrkjsQGhrWeAouxUAEpz0vjNoRaMMSF3eVNKqWjTSuUgGSlJ5GemRDUgPLa5nr/6+b6Izm3tPh8QSnLT6B/y4OyftgMqpdQFRgPCCNGuRfi/W07yyz1nIjq3rWeAwqwUAEpzfaOVs05daaSUig8NCCP49kWITlL5ZHsvp8714XS5cbqGxj1/5AgBtFpZKRU/GhBG8O+cFo3GcpuCtsEcr4vqkMdLR98QhVlWDsEKCJpYVkrFiwaEESocGQy4vbT1TL2x3FuH20iyCtzGCwjt1uv5RwhF2anYREcISqn40YAwwvk22FObNhp0e9lyrI3rFhVH9Hz+orQia4SQbLdRlJ2q1cpKqbjRgDBCRX502mDvOtVB76CHT6woIy3ZNu7z+YvSCq0RAkBJbrqOEJRScaMBYYRobZSz6UgrdpuwfkEhZXnjr1waOUIAXy2C5hCUUvGiAWGE7LRkc
tOTp7yV5qbDbayszCMnLZkKR8a4AaY1qG2Fn1YrK6XiSQNCCL422JPPIbT3DLD/TBeX1xYFnm+8ANPaPUB2ahJpyfbAsdLcNLoH3HRHsGRVKaWmSgNCCFMtTnv7aBvGwBV1RYHna+8dpH/QE/YxrT0Dw/IHcL4WoVn3RVBKxYEGhBDKHb6NciZbi7DpcBt5GcmBjW7K88bvotrWPTAsfwBB1co6baSUigMNCCFUODLoG/TQ2TfxqRpjDJuPtPKRBYXYrRqEigia5rUGNbbzK9VqZaVUHGlACGGu9Qb+9tG2CT/2g6ZuWroHuKL2/GY+5Y7xl7K2dZ/vY+Q3J8cXIM52akBQSsWeBoQQrqgrYllFLn/5P3s5cKZrQo/dfMTXruLyusLAsTnZaSTZJOwIwTXkwelyjxohpCbZKcxKoUkb3Cml4kADQghpyXYe+/xqctOT+cITOyaU1N10uI264qzA/D+A3Sa+WoQwAaG919e2onBEDgF8iWWdMlJKxYMGhDDm5KTxwzvX0O0a4gtPbqdvcPx9CfoHPWw7cS6w3DRY+RjFaYGitOwQASEnXWsRlFJxoQFhDIvLcvju7Ss4eMbJl5/djdc79qqj9463M+j2BpabBisfo622PyCEGiGU6ghBKRUnGhDGce2iYv7mpsW8drCZb73ywZjnbj7cRkqSjXXV+aN+V+FIp6V7gEG3d9Tv2kJUKfuV5KbR1T8U0QhFKaWmQgNCBO7aUMXnLp3H9zfV8+y2U2HP23SklXXV+cOqjf3K89IxBs6G6F7qHyEUjFhlBLovglIqfjQgREBE+LuPL+aKuiL+5vn9fPv1wzScGz79c6azn6MtPcOWmwYrH6MWoa1ngNz0ZFKTRgeSEg0ISqk40YAQoSS7jX//vRVcXlvId39zhMv/6U1ue3QLz+1spHfAHdgdLVT+AGCuIwMIvVFO8NaZI2m1slIqXpISfQEzSU5aMj+6ay2NHX38/P3TPPd+I1/96R4e/MV+ctOTKc5Jpa44K+RjS3LTsAk0hlhp1NYzuigt8Lgca4Sg/YyUUjEW0QhBRDaKyIciclRE7g9zzmdF5KCIHBCRp4OOf0tE9ltftwYd/6GI7BGRvSLynIiEfiedhiocGfzptbX89qtX8dN7L+Pjy8rodrm5cWkpIhLyMcl2G8U5aSFXGvlGCGkhH5eeYicvIzlk7sGvs2+QngFNOiulpmbcEYKI2IFHgOuBRmC7iLxgjDkYdE4t8ACwwRjTISJzrOM3ASuB5UAq8FsR+ZUxxgl8xfoXEfkX4EvAN6N6dzEmIqypymdNVT7f+ORSbKFjQUC4NtitIdpWBCsZY6McYwy3fv89MlLt/M+967GNdxFKKRVGJCOEtcBRY0y9MWYQeBa4ZcQ5dwOPGGM6AIwxLdbxxcAmY4zbGNML7AU2Wuf4g4EA6cDkWotOE3abhB0d+IVqq9036KZ30BM2hwBQlhd+K833T3XwYXM3u0518uLeMxO/cKWUskQSEMqBhqCfG61jweqAOhF5R0TeE5GN1vE9wEYRyRCRQuBqYK7/QSLyI6AJWAj82yTvYcYod/je2N2e87UIbd2+thUjW18HG2vntJ/uaCQjxc7Ckmz+6ZUPcQ2F33NBKaXGEq1VRklALXAVcDvwAxHJM8a8BrwMvAs8A2wBAu9Yxpi7gDLgEHArIYjIPSKyQ0R2tLa2RulyE6M8LwOP19Bs1R3A+a0zR26OE6w0J4323sFRb/Z9g25+ufcsNy4t5cGPL+Z0Zz+Pv3M8NhcfxodN3Rw844zrayqlYiOSgHCaoE/1QIV1LFgj8IIxZsgYcxw4jC9AYIx52Biz3BhzPSDW7wKMMR5801CfCvXixphHjTGrjTGri4pCL+mcKfz7IgTnEQJ9jMYZIQC0OAeGHf/VviZ6Btx8ZlUF6+cXct2iYr735rFA5XM8/O3z+/ns97dwtKUnbq+plIqNSALCdqBWRKpFJAW4DXhhxDnP4xsdYE0N1QH1I
mIXkQLr+DJgGfCa+CywjgtwMzB2X4gLwPnitPMrjcZqW+F3vhZheP7hpzsbmFeQwVqrVcYDNy7ENeTh268fHvUcsXK6s5+eATf3/NcOnLr3s1Iz2rgBwRjjxrcC6FV8Uzs/McYcEJGHRORm67RXgXYROQi8CXzNGNMOJAObreOPAndYzyfAkyKyD9gHlAIPRfnepp3AVpojRggikJ85xiqj3NG1CKfa+3iv/hyfWVURSGbPL8rijkvn8cy2Uxxu7o7FLQzj9Rpaul2sq87nVHsf9/14/AaASqnpK6LCNGPMy/hyAcHHHgz63gD3WV/B57jwrTQa+XxeYMMkrndGS0u2U5iVOmylUWvPAPkZKSTbw8fmkhBbaT63swER+OTKimHn/vm1tfzs/Ua+8fIhnrhrbZTvYLhzfYMMeQw3XFzCDReX8PcvHuS7vznCl6+ri+nrKqViQ1tXxJmvDfb5gODbOjP8dBFAVmoS2WlJgZVGXq/hf94/zUcWFFKWlz7sXEdmCn96TS2//bA10E4jVvwbB5XkpnHn+io+tbKCf/31EV4/2BzT11VKxYYGhDirGLFRTmtP+D5GwXz7Ivge9+6xdk539vPZ1XNDnvv59fOozM/g4ZcO4YnhFI4/IBTnpCEiPPy7F7O0PJf7frybY62aZFZqptGAEGf+4jT/XPtYfYyCleSe3zntJzsayElL4vrFxSHPTU2yc/8NC/mwuZuf7GgIeU40NHX5EuLFVr+ltGQ73//cKlKSbNzzf3fQrUlmpWYUDQhxVu5IZ9Dtpa1nAGPMmJ1Og5Xm+HZO6+of4tUDTdyyvDzkvgt+N1xcwup5Dr79+mEG3LEpVmt2uhAZvkKqLC+df/+9lZxo7+Mvntsbk9dVSsWGBoQ48680auzsp3fQg2vIO24OAXzz9K09A/z8/UYG3N6w00V+IsKfXVtLS/cAL+45G5VrH6nZ6aIwK3VUQvyy+QXcfXkNv9rfpEtRlZpBNCDEWXlQcVqgKC3CHIIx8OimehaWZHNxec64j7m8tpC64ix++PZxfAvBoqvJ6aI4J/S1X1rjq43QKmalZg4NCHEWGCFMMCD4l56e6XLx6aDag7GICF/4SDWHzjrZUt8+hasOranLFdivYaQlZbkAHIhhQNh0uJWntp6MSbBTajbSgBBn2WnJ5KYnc7qzL1ClHMmUkb9aOckm/O6Kkb0Fw7tleTkFmSk8/nb0exy1dA8EEsojFWWnMic7lQNnuqL+un7/+NJB/vrn+3ngZ/uGNQxUSk2OBoQE8O+LMJkRwrWL5lAQQQDxS0u28/uXzuPXh1qoj+JS0AG3h3O9g2EDAsCSspyYTRmd7erncHMPC0uyeXZ7A/f+9/va6VWpKdKAkAAVVnFaW88ANgFHxvjLTnPTk/nrGxfxtd9ZOOHX+9yl80ix2/jROycmcbWh+RvthZsyAt+00ZGWnpi8UW8+3AbAv962nK/fvIQ3Pmjmjse20tk3GPXXUmq20ICQAOVWLUJr9wAFWanYI9zl7O4ralgwZ+I7jRZlp3Lz8jKe29kYtTfMQFFa7tgjBI/X8GFT9PsqvXWkleKcVC4qzubO9VX82+0r2NvYxWf+c8uY240qpcLTgJAA5Xnp9A16ONLSE1H+IBq+8JFq+oc8PLMtOoVqTYEq5fDXH6vEssdrePtIG5fXFgWS6x9bVsYTd63hbJeLT33vXY62xL65X7S0OF0MaQ5ETQMaEBKgwpEBwL7TXRHlD6JhUWkOGxYU8OS7J8Z884l0xY6/anqsKaO5+elkpyVFPbG8p7GTrv4hrqgbvj/G+gWFPHvPpQx6fPtM9w64o/q60dQ36Oa5nY18+j/eZe033uDRTfWJviSlNCAkgn+jnEG3d8yNcaLtCx+ppsnp4uV9owvVXEMevvmrD7job19h58lz4z5XS/cAqUk2ctOTw54jIiwuzYn6CGHT4VZE4PIFhaN+d3F5Lv9xx0raewf55TTcY3pfYxd//fN9rHv4D
b760z1WYj6Vt4+0JfrSlIqs/bWKrvKgDqWF2eMnlKPlqro51BRl8sO3j3PzJWWB6ZYtx9p54Gd7OdHu27hn+4kOVs3LH/O5mrpclOSmjVsPsaQsl6e3ncTjNRHnSsaz6XAry8pzcYTZQ2L1PAe1c7J4ZlsDt66pjMprTpbHa9jT2MlbH7by+sFmDp51kppk46alpdy2tpI1VQ6+/uJBfry9gSGPd8w26ErFmgaEBMjLSCYzxU7voCeuIwSbTbhrQzV/+/x+dpzsoK44m2/+6hDPbGugMj+Dp7+4jj97dhfHW3vHfa4mp4vi7PDTRX5LynJwDXmpb+2htjh7yvfQ1TfE7oZO/uTqBWHPERFuX1vJQ788yKGzThaVjl/VPVHPbjvFf289yVxHBpUFGVTmZzAvP5N5BRmkJNl4+0gbvz3cyuYjrXT2DWETuGRuHv9wyxJuXl4+bGS1ap6DJ949waGzTpZV5EX9WpWKlAaEBBARyh3pHG7uiVsOwe9TK8v5P69+yD/88iBNXS7aegb4wytq+PJ1daSn2KkqyOR4+/gBodnpiujNa4nVYuPAGWdUAsI7x9rwGkblD0b65MpyvvnKBzy77RRfv+XiKb/uSM9ub+B0Rz99gx7eONTCYIi8TGFWKtcuLOaqi4r4yILC8COaKgcAO092aEBQCaUBIUHK86yAEMcRAkBGShK/v66S7/32GItLc/jhnWtYWpEb+H11YSa/HWdjHWMMzU4XJWOsMPKbX5RFSpKNA2e6+MQEKqzD2XS4lezUJJbPHfuNMy8jhRsuLuHnu05z/w2LSE8J3xl2ogbcHg6ecXLXhioeuHERHq+hyeniZHsvDef66Ha5ubSmgMWlOdgimCYrzU2nPC+dHSc7uGtDddSuU6mJ0oCQIP4md4VxHiEA/Nm1tSyfm8fVC+eMmrOuLsrkpzsb6XYNkZ0WOmHs7HfjGvKOWaXsl2y3sbAkOyqJZWMMmw63sn5BQURz7bevreQXu8/w8r6zfGpVxbjnR+rgGSeDHm8gKNltQnme702d+ZN7zlXzHGw7fg5jTER9qpSKBc1gJUh1YRZ2m0Q0Dx9tacl2PrqkJOSbanVBJgAnrQRzKE1BO6VFYkmZb6XRVJvQHWvt4UyXa9zpIr911fnUFGby7PZTU3rdkXY3dAKwvDJ60zur5jlocrqG7aanVLxpQEiQ31tbyc/+aD25GeGXbSZCdZEvINS3hc8jBO+lHInFZbl09Q9N+c3uLatdxRW1kQUEEeG2tXPZfqKDI83RK1Tb3dBJcU5qoOFgNKyadz6PoFSiaEBIkPQUO5eMMw+eCFXWCOHEGAEhMEKIcHSzpOx8YnkqNh1upaYok7n5GRE/5lMrK0i2C89uj95WorsbOlkx1xG15wNYWJJNZoqdHSc0IKjE0YCghklLtlOWm8bxsUYIVpXynAiSygCLSnKwydgBoX/QwwM/2xf2E7JryMPW4+0Rjw78CrJS+ejiEn72fmNUmuy19wxwsr0vqtNFAEl2GysqHezQEYJKoIgCgohsFJEPReSoiNwf5pzPishBETkgIk8HHf+WiOy3vm4NOv6U9Zz7ReRxEZlecyezWHVR5thTRt0uHBnJY+7pHCw9xU5NURYHx2hh8dOdDTyz7RR3Pr6NPdYcfbBtx8/hGvJyZYT5g2C3r62ko8+3F/VU7Wm08gcxGN2tmufgwyYn3brtqEqQcQOCiNiBR4AbgMXA7SKyeMQ5tcADwAZjzBLgy9bxm4CVwHJgHfBVEfFXCT0FLASWAunAF6NxQ2rqqgszOd7aEzYJ3NQVfmOccC4uC9/Cwu3x8tjm4ywuzSE/M4XPP75t1D4Kmw63kmK3sa5m7ArqUNbPL2BufjrPRqGx3+5TndgElgUt1Y2W1VUOvAZ2nRodEJWKh0hGCGuBo8aYemPMIPAscMuIc+4GHjHGdAAYY1qs44uBTcYYtzGmF9gLbLTOedlYgG1A9NYFqimpKsjE6XLT0Rf6k2qz0
zXhgLCkLJezXS7O9Y5uv/3KgSZOnevjz66t5akvriMjxc7nfrh1WMfSTUdaWVPtICNl4iulbTbhtjWVbKlvH3MqLBK7Gjq5qCRnUtcxnhWVDmyCThuphIkkIJQDwR+tGq1jweqAOhF5R0TeE5GN1vE9wEYRyRCRQuBqYG7wA62pos8Br0zmBlT01VgrjY63hd5hrckZfi/lcM4nlodPGxlj+P5b9dQUZnL94mLm5mfw1BfXISL83g+2cqKtN7A72kTzB8E+s6oCu02mtATV6zXsbuiMyXQRQFZqEgtLciJqLqhULEQrqZwE1AJXAbcDPxCRPGPMa8DLwLvAM8AWYGRm73v4RhGbQz2xiNwjIjtEZEdr69gVtCo6qgt9m/Acbxtdi+D2eGnrGRhzH4RQFodZabTlWDv7TnfxxctrAs3vaoqyeOqL6xjyePn9x7YGpnoirT8IZU5OGtctmsP/7GwMLJudqPq2XrpdblZEOaEcbHWVg12nOnWPaJUQkQSE0wz/VF9hHQvWCLxgjBkyxhwHDuMLEBhjHjbGLDfGXA+I9TsAROTvgCLgvnAvbox51Biz2hizuqho8m8IKnIVjnTsNgk5QmjtGcCYsXdKCyUvI4XyvPRRAeH7m+opzErhkyuHDzovKsnmv76wDqdriO+8cYQ52aksLJlaL6Q7L6uivXeQdd94g43/uolvvHyIt4+0Rbz6aNcp31TOihguF141z0HfoIcPorTLXP+gh8c213PjdzZHfV8KdeGJJCBsB2pFpFpEUoDbgBdGnPM8vtEB1tRQHVAvInYRKbCOLwOWAa9ZP38R+B3gdmOMfhyaRpLtNirzM0LOt0eyMU44vorl829Kh846eetwK3dtqA65Yuni8lye/F9ryUyx89ElxVNu6bB+QSGv/PkV/OXGhTgyUvjRO8e544dbWf7Qa9z1o21j1l6Ar/4gOzWJ+UUT38Y0UqurfEnzHSemNm3kDwSX/9Ob/ONLhzh41smLe0bvg6FUsHEzY8YYt4h8CXgVsAOPG2MOiMhDwA5jzAvW7z4qIgfxTQl9zRjTLiJpwGbrP2QncIcxxr+N1X8CJ4Et1u9/Zox5KMr3pyapujAz5JRRs3MAiLxtRbAlZbm8fqiZ3gE3malJPLqpnowUO3esmxf2MSsrHbx7/7WkJkdndvOikmwuKsnmj66aT++Am63H29l0uI2f7mjgW698wH/csSrsY3c3dHLJ3LyIGtZNVnleOqW5aew42cEfTKLRXf+gh6e2nuQ/36qnrWeADQsK+N61K/nmrw6x9Xh7DK5YXUgiWiphjHkZXy4g+NiDQd8bfNM+9404x4VvpVGo59TGetNYVUEmW4614/WaYW+AzRPsYxRsSVkOxvhGBqV56by45wyfv6xq3PYdsWrvkZmaxDULi7lmYTGpyTZ+sKme0539wzYw8uu3pnH+6MpJdq+bgFXzHJNqYXG8rZfP/OeWYYFgbbVvxLGupoAfbKqnb9AdkxVS6sKglcoqpOqiTPqHPDR3D0/ANjldJNmEgjC9/ccSvDfC428fxwBfuHx6tHv+3KW+Ucp/v3cy5O/3ne7C4zUxW2EUbNU8B2e7Jt7o7o1DzbT1DPDUF9fx1BcvDQQD8DX6c3uN9kpSY9KAoEKqKfQvPR0+r97c5WJOduqkpk1KctLIz0xhy7F2ntl2io8vKw35aTwRKhwZXL+4mGc+y0Q6AAAgAElEQVS2nQqZZN7d4HsjjXbLilBWz5tcHqGxo5+s1CTWzy8Y/ZxV+dhtwrbjuqRVhacBQYVUFS4gdLsmvMLIT0RYUpbDKwea6Bv0cM8VsZ9+mYg/WF9NZ98QL+w+M+p3uxs6mZufTmEcNjRaVJpNRop9wp/mGzv6qHCkh0y+Z6UmcXFZDlvrNSCo8DQgqJBKc9JITbKNWnnT1DXxorRg/nqEK+qKAt9PF5fW5HNRcTZPvHtiVNuOXac6WR7lDqfhJNltLJ+bN+HOpw3n+qlwhO8Eu66mg
N0NnVFp8qcuTBoQVEg2m1grjUaMEJwT72MUbGWl70313itrpnR9sSAi/MGGKg6edbI96M242enibJcrLvkDv9XzHHzQ5KRnwD3+yfgqvhs6+pibH34Kbm1VPoMe77TpldQz4NYCvGlGA4IKq6pgeNfTngE3PQPuKQWEjy4u5tf3XcH6+YXRuMSo+8TycnLTk3ny3ROBY/430FhWKI+0qirfanQX2Siho2+IvkHPmCOENdX5iDBtlp/e+v0t3PCdzZwaY3c+FV8aEFRY1UWZNJzrC3yKO79T2uTn0UWEBXOmVnEcS+kpdm5dM5dXDjRxxlrls6uhg2S7sLg0flNcKyrzEIl8B7WGc7431bmO8COE3PRkFpXkTIvEstdr+LCpmyMtPXzie+9Mi2tSGhDUGKoLMxnymMDyR//GOFMZIcwEn7t0HsYYntrqW4K6+1Qni0tzIt7/IRpy0pK5qDg74umdhg4rIIyzm9y6mnzeP9XBoDuxUzVtvQO4vYb/taGavPRkfv+x9/jpjujtaqcmRwOCCqu6cPj+yv6ahAs9IMzNz+DaRcU8s62BvkE3+053saIyPgnlYLXF2RxrDd1xdqSGc76gXTHGCAFgXXUBriEvexsTm0fwt0C5tCafn//xBtZVF/C15/byv391CI839D4cKvY0IKiw/AHBv9KoqcvXtmIqq4xmirvWV3Gud5D//7XD9A164ppQ9qspzOR0Z39Eq4IaO/rIy0gmO23sqm5/sdrWBE/R+ANCaW46uRnJ/OiuNdxxaSXff6ueP/yvnfRGmExX0aUBQYVVkJlCdlpSYKVRs9NFdmoSmakXfuuDy+YXUFecxY/eOQ7EZsvM8dQUZWIMnIwg6drQ0c/cMRLKfvmZKdQVZ/FefWITy4EWKFY+Ktlu4x8/sZSv37yE33zQzL3/vTORlzdraUBQYYkMX3ra1OVizgT3QZipRIQ711fhNeDISGZewfhvttFWY+1LUR/BtFHjubGXnAZbV13AzpMdDCVwyefZLl8LlMLM4X9Pd66v4ivX1bH5SBuNHbr6KN40IKgxBQeE5m4XJZOsUp6JfndFOTlpSaysdEy59fZkVBcNz+GE4/UaGjvHLkoLtq4mn75BT9g9ruOhydqGNVQLlFuW+/bGeHmftuuONw0IakzVQfPYzV0T30t5JstISeLpuy/l729ekpDXz0pNojgnlfrWsQNCa88Ag27vmEtOgwXyCAmcNmrqcoXdda+yIINlFbm8tFcDQrxpQFBjqi48P4/d0j21KuWZ6OLy3HGXcsZSTWEW9WH2tvbz1yBURHidc7LTqCnKTGhiucnpojQ3fAD72LJS9jR2adFanGlAUGPyrzTacfIcbq+ZFSuMppPqokzqW3tH9VYK1tjhW3Ia6QgBfHmE7cfPJWSJpzHGGiGE/1u6cWkpAC/ptFFcaUBQY/J3PX3P6pI520YIiVZTmElX/xDnegfDnhMYIUSYQwDf/gjdA24OnY1/HqF7wE3foIfSMfJRFY4MVlTm8cu9ozvPqtjRgKDGlJOWTGFWSmCZYrh5XxUb/v2bQ+1v7dfQ0UdRduqEKqnX1SSuHiFQ8T7OAoWblpZy4IxzzHsH6HYN8d03jtA3mJjahbaegYS8bixoQFDjqi7MpLXbKkqbRauMpoMa/0qjMRLLvrbXE9toqDQ3ncr8jIQkls8GitLG/lvyTxuNt9roO78+wr+8fphfJiAJfbSlm7UP/5otx6ZHw8Cp0oCgxuXPI4hAURw2iFHnleelk2wXjo2RWG7s7IuoKG2kddX5bDtxDm+c8whN/iaJ40w/luWls3qegxf3hJ82OtHWy5NbTgCw6XBrtC4xYluPn8Nr4IOmxC3hjSYNCGpc/jxCYVYqSXb9k4mnJLuNeQWZYUcIbo+XM52uiIvSgq2rKaCzb4jDLd1TvcwJ8betiKTI8aZlpXzQ1M3RltAB8X//6hApdhtX1hXx9tG2uCfJ9zZ0AQQ64850+l+3Gpd/f2VdY
ZQYNSE2KvI72+XC4zWTHiEAcd9Ws8npoiAzhdSk8XMeNy4tRYSQNQnv1bfz6oFm/vjqBXxyZTmdfUNxb9q3x3q90xoQ1GxRbbVQ0BVGiVFdlMnJ9t6Qu4v5l5xOZIWRX4XDl0f4zQctU77GiRhvyWmw4pw01lTl89K+4dNGXq/hH186SFluGl/4SDWX1xYhApsOt8XikkPqH/RwxBq5nO6YRQFBRDaKyIciclRE7g9zzmdF5KCIHBCRp4OOf0tE9ltftwYd/5L1fEZEpuf2WQog0MdHVxglxvzCLIY8JvDmH+z8PggTnzISEW5YWsI7R9vo7Au/rDXamrpc4yaUg31sWSmHm3s43Hx+autnu06z/7STv7xhIWnJdvIzU1hWnsumI/HLIxw824XHayjKTp09IwQRsQOPADcAi4HbRWTxiHNqgQeADcaYJcCXreM3ASuB5cA64Ksi4t926h3gOuBkdG5FxUpasp2v37yE31tXmehLmZX8K41CTRs1nuvDJoxZ9TuWm5aW4vYaXjvYPKVrnIgmp2vcJafBNl5cgk0IrCLqG3Tzz69+wCVz8/j4srLAeVfUFbHrVAddfUNRv+ZQ9lj5g41LSmjrGYyoTfl0F8kIYS1w1BhTb4wZBJ4Fbhlxzt3AI8aYDgBjjH8MuhjYZIxxG2N6gb3ARuucXcaYE1G4BxUHd66vYklZbqIvY1aqsWoRQm2W09DRT0lOGilJk5v9XVqeS4UjPW6N5AbcHs71Dk4oHzUnO4111QW8tPcMxhge3VRPs3OABz+2aFhzvCvrivAaeOdYfKaN9jZ2UpKTFmiNfiEkliP5KyoHgve2a7SOBasD6kTkHRF5T0Q2Wsf3ABtFJMOaFroamDvVi1ZqNnFkJJObnhyy62ljR1/EPYxCERFuWlrKO0fb4vLJusU5uXqWm5aVcqy1l98ebuX7b9Vz07JSVs3LH3bO8rl5ZKcl8daH8Zk22tvYxbKKXMqtGpALYdooWknlJKAWuAq4HfiBiOQZY14DXgbeBZ4BtgATGleJyD0iskNEdrS2xn+dsVKJJiLUFGWG3Beh4VxkG+OM5calpQx5DK8dbJrS80TCX5Q20RVrN1jTRl966n08XsP9GxeOOifJbmPD/EI2HWkds/dTNHT1D1Hf1sslc/Moz7MCwgWQWI4kIJxm+Kf6CutYsEbgBWPMkDHmOHAYX4DAGPOwMWa5MeZ6QKzfRcwY86gxZrUxZnVRUdFEHqrUBaOmMGtUDmHA7aG52zXhKuWRllX4po3i0UjOX5Q2kaQyQEFWKuvnF9I76OGuj1SF7UB75UVFnO1yha1biJb9p335g2UVuZTkpmGT2TNltB2oFZFqEUkBbgNeGHHO8/hGB1hTQ3VAvYjYRaTAOr4MWAa8FqVrV2rWqCnKpNk5QE/QXsNnOl0Yw5Tbc4sIN8Zp2qipy/emOZGkst/nL5vHJRW5/MnVC8Kec0Wd70PjWzGuWvbXHywrzyPZbqM4J43G2RAQjDFu4EvAq8Ah4CfGmAMi8pCI3Gyd9irQLiIHgTeBrxlj2oFkYLN1/FHgDuv5EJE/E5FGfCOOvSLyWLRvTqkLhb848HhQxbK/y+lE2l6HE69po6auATJT7GRPYl/ujy4p4Rdf+gg5aclhzynPS2fBnKyYB4S9DV3MK8ggNyM58LoXwpRRRP+vGGNexpcLCD72YND3BrjP+go+x4VvpVGo5/wu8N0JXq9Ss5J/pVF9Ww9LK3yrvc7XIEx9A59LKnIpz/OtNvrM6tit+2hy9lOcmxbTLUmvqC3iqa0ncQ15JtQBdiL2Nnayqup8Urvckc7Okx0xea140kplpWaAeQUZiAzvetrY0U+yXaJSQS4i3LSslLePttHVH7tpo6YuV8xboFx5UREDbm+gZXu0tXYPcKbLxSUV55dhl+el02S1EZnJNCAoNQOkJdupcKQPW3racK6Psrx07CE2qp8M/7TR6zEsUmvqcsW8hfq66nxSk2wxa2Ph7
5e0rCIvcKwsLx2319DS7YrJa8aLBgSlZojqwqxhS08bOqa+5DRY8LRRLHi9hpbugZiPENKS7ayrKYhZG4u9jV3YBC4uzwkcC9QizPA8ggYEpWYIf9dT/xr7xnN9U15yGsy32qiEzUdaYzJt1NY7gNtrJrzkdDKuqC3kaEtPTIrF9jZ2Ujsnm4yU8ynYirwLozhNA4JSM8T8okz6Bj00OwfoG3TT3jsYlYRyMP+00a9jMG3k3wchHl1zr7SWn0Z70xxjTKBCOVhZFALCud7BQH1DomhAUGqG8Lchr2/tCWp7Hb0RAvjaP5TnxaZIrSmwdWZ0rzmUBXOyKMtNi3obi9Od/bT3DrJsbt6w45mpSeRlJE9pyuh7bx7lk997l2Zn4vIQGhCUmiH8XU+PtfUGahAmsw/CWCY7bXSqvY+v/Hg33a7wj/FXKRfnxr6NuohwRV0R7xxrC7mPxGTtbfR9gr+kYnSjx/K89CmNEBo6+hj0eHn8neOTfo6p0oCg1AxRkpNGerKd4629gRHCZPZBGM9kpo0efGE/P991mjfH+ETe1OUiySYUZsZnX40r64rodrnZ3RC9XdT2NHaSYrdxUUn2qN+V56VPqX1Fk9X47+n3TuEcI7DGkgYEpWYIm02oLsykvq2HhnN9pCXbKMqK/purf9ropzsbImoS9+YHLfzWCgRjrf3375Rmi9Iy2fGsX1CICLx9NHrLT/c2dLGwNDvk9p/lDl+18mQb67U4XSwuzaF7wM0zW09N9VInRQOCUjNIdVEm9a29NHT0UeHIiEnFr4hw9+XVvFd/jme2NYx57qDbyz/88iA1hZlcXlvIe8fGCAhOV1x33ctNT+ai4uyojRC8XsP+06MTyn7leen0DnomtULLYy3JvWbhHDYsKODxd44z4I7/hjsaEJSaQeYXZtLY0cex1t6oJ5SDff6yKi6vLeQffnkw5E5tfk++e4L6tl7+9mOLubKuiPq23rBJ0SanKy4J5WArKvPYdaozKu2w69t66R5wDytIC+Zvgx1qq9PxtPcM4PEainNS+cMr5tPsHOAXu8+M/8Ao04Cg1AxSU5SF18DRlp6oFqWNZLMJ//zpS0hJsvHlH+9mKERitrV7gO++cYSrLyri6oVzuLSmAIAtIUYJxpjAlFE8rZjroKt/aMygFil/hfIl4QKCFaAnk0dotvIHxTlpXF5byKLSHB7dVI83zq0wNCAoNYP4VxpBbBLKwUpy0/jG7y5lT0Mn//6bo6N+/39e/ZD+IQ9/8zFf/8pFpTnkpCWFzCN0D7jpG/TEpSgt2IpK35v3rlNTnzba29hFRoqdBXOyQv6+fAq1CP4VWCVW4797r6zhaEsPv/mgZZxHRpcGBKVmkOrC8wEh2ktOQ7lpWSmfXFHOv795lPdPne/mua+xi5/sbOCuDVXMtzqx2m3CupoCtoQICIGitDgHhPlFWWSnJrGrYeqdSPc2dnJxWW7Y3lH5mSmkJdsmVYsQWJJrjaBuXFpKeV463990bPIXPAkaEJSaQbLTkinK9iVmYzllFOzvb1lCSU4aX/nxbnoH3Bhj+PsXD1CQmcKfXls77NzLago42d43atqkaZJbZ06VzSYst/IIUzHk8XLgjDNsQhl8yfiySdYiNHe5sNuEQmvVWLLdxhcvr2b7iQ52njw36eueKA0ISs0w/s1yYj1l5JeTlsy/fPYSTp3r4x9fOsgLe86w82QHX/udi0ZtVhMuj3C+Sjm+AQFgxdw8Pmjqpm/QPf7JYRxu7mbA7R1VoTzSZGsRmp0uirJSh40+bl0zl7yMZL7/Vv2En2+yNCAoNcMsKs2hIDOF3PTwO4dF27qaAu65ooZntjXwN8/vZ2l5Lp9ZNXojnYUl2TgykkdNG/mnRObEcdmp34pKBx6vYV/j5PoEuT1efrDJ96a8PExC2a/CMbkRQqgluRkpSXz+0nm8fqg55ntE+2lAUGqG+cp1dfzk3stiuutYKPddX+crnHK5+fubF4csMLPZh
HXVBaMSy2e7XBRkpoQs6Iq15dan+l2TqEfoH/Rw73/v5PndZ/jKdXVUFow9TVeWm05bzyCuoYnVEDQ7Q6/A+vz6KlLsNh7bHJ9RggYEpWaY3IzkQCI3nlKT7Dxx1xqeuGsNq+blhz3vsvkFNHb0B/otQfg3vHhwZKZQXZjJrlMTSyx39Q3x+ce38sYHLfzDLUv48+tqx31MYF+ECY4Smp0DITcOKsxK5TOrK/jZ+6dpiUPTu4nvdK2UmrXm5KQxZ5w39kAeob490J47HjuljWXF3Dw2H23DGBPRyKrZ6eLzP9xGfVsP/3b7Cj62rCyi1/EvPT3T2R9x0HYN+aqbwwXMuy+vocflZigONQk6QlBKRVVdcRYFmSnD2lg0ORMcECrzAnshj6e+tYdPfu9dGjv6eOKutREHA5jczmnj7RMxryCTf71tRSDYxJIGBKVUVIkIl9b48gjGGFxDHs71DsZ9yWmwFZUOgHGnjQ6c6eIz/7kF15CHZ++5jA0LCif0OsU5adhkYlNGgaK0BP7v46cBQSkVdZfOL+BMl4tT5/posdoyJHKEcFFJNmnJtnHrER78xQHsNuGn917G0jFqDsJJttsoyUmb0AihOVClHP8VWCNFFBBEZKOIfCgiR0Xk/jDnfFZEDorIARF5Ouj4t0Rkv/V1a9DxahHZaj3nj0UkZeq3o5SaDi4LqkeYDp+Ak+02lpXnjTlC2NfYxc6THdx75XxqppC0L5/g0tPmwJLcGTBCEBE78AhwA7AYuF1EFo84pxZ4ANhgjFkCfNk6fhOwElgOrAO+KiI51sO+BXzbGLMA6AC+EJU7Ukol3PyiTIqyU9lS387ZLt+bYyKK0oKtqMxj/xln2LbST7x7gowUO59eXTGl15nozmlNXQNkpNjJTk38Gp9IRghrgaPGmHpjzCDwLHDLiHPuBh4xxnQAGGP8HZkWA5uMMW5jTC+wF9govjT/NcBz1nlPAp+Y2q0opaaL4DxCszMxfYxGWlGZx6Dby6Gz3aN+194zwIt7z/CplRWjqq8nqiwvnaYuF54IVwU1d7soyUmLe11JKJEEhHIgeJeMRutYsDqgTkTeEZH3RGSjdXwPvgCQISKFwNXAXKAA6DTGuMd4TqXUDHZZTQHNzgG2HGufFp+Ax0osP7u9gUG3lzvXz5vy65Q70nF7Tdh9IUZq7nIlpII7lGgllZOAWuAq4HbgByKSZ4x5DXgZeBd4BtgCTKiET0TuEZEdIrKjtTX8fq1Kqenlsvm+PMKmI22Bts6JVJyTRllu2qjE8pDHy39tOcnltYUsmDN6r+SJCq5FiEST0zUtVhhBZAHhNL5P9X4V1rFgjcALxpghY8xx4DC+AIEx5mFjzHJjzPWAWL9rB/JEJGmM58R6/KPGmNXGmNVFRUWR3pdSKsGqCjIozknF4zXT5g1vRaVjVCvs1w400+R0cedlVVF5jYoJVCsbY2hxDiR8Os0vkoCwHai1VgWlALcBL4w453l8owOsqaE6oF5E7CJSYB1fBiwDXjO+/ezeBD5tPf5O4BdTvBel1DQiIoHVRolcchpsRWUeDef6ae0eCBx78t0TzM1P5+qFc6LyGmUT2Eqzo2+IQY+X4uzp8b/PuAHBmuf/EvAqcAj4iTHmgIg8JCI3W6e9CrSLyEF8b/RfM8a0A8nAZuv4o8AdQXmDvwTuE5Gj+HIKP4zmjSmlEs8/bTR9Rgi+Rne7rUZ3B850se3EOe68rCrsxjcTlZGShCMjOaIRQmCfiGkSMCPK8hhjXsaXCwg+9mDQ9wa4z/oKPseFb6VRqOesx7eCSSl1gVo/vxCbQFVB5vgnx8GSslyS7cKuUx1cv7iYJ989QXqync+sHt3KeyrKHZHti9DsHLttRbwlfuGrUuqCNTc/g1e+fMW0CQhpyXYWl+aw61Qn53oH+cXuM3xqVUXU95Yoy03neFvvuOc1O6fXCEFbVyilYqquOJuUpOnzVrOi0sGexk6e3nqSAbc3asnkYP5qZ
d/kSXj+Ku6irAtr2alSSs0IKyrz6Bv08L3fHuOymgIuKpn6UtORyvPS6Rv00Nk3NOZ5zU4XhVkp0yZgTo+rUEqpOFkx11eg1jfo4Q82VMXkNSJdetrUlbiNg0LRgKCUmlXm5qdTkJlCeV461y0qjslr+JeejhcQmp0D0yogaFJZKTWriAgP/+5SctOTo7bUdCR/tfJ4bbCbnS4usfZ8ng40ICilZp2NF5fE9PnzM1PITkviaGtP2HMG3B7aE7xx0Eg6ZaSUUlEmIqysdLDzRPj9F/zV0sXTpLEdaEBQSqmYWFPl4MPmbrrCrDSaLm3Bg2lAUEqpGFg1Lx+A98Ps0tbUZW0tqlNGSil1YVs+N48km7D9xLmQv58OW4uOpAFBKaViID3FzsXluewIk0docbpISbKRlxHdthlToQFBKaViZPU8B7sbO0Pu49zkdFGck5rwjYOCaUBQSqkYWV2Vz6Dby/7TzlG/a+qaPjul+WlAUEqpGFld5WuTsSNEHqGle4A5GhCUUmp2KMxKpbowk+0j8gjGGB0hKKXUbLN6noOdJ88Na4XtdLnpH/JoQFBKqdlkTVU+HX1DHGs9v2GOvyhtzjSqUgYNCEopFVOh8gjN07AGATQgKKVUTFUXZlKQmcKOk+fzCE1d02vrTD8NCEopFUMiwqp5jpAjhOm0FwJoQFBKqZhbU5XPifa+QIfTZucAuenJpCXbE3xlw2lAUEqpGFtl5RF2nvSNEpqc02/JKUQYEERko4h8KCJHReT+MOd8VkQOisgBEXk66Pg/WccOich3xarTFpFbRWSv9btvRed2lFJq+rm4LJfUJFugHqHZ6ZpWba/9xg0IImIHHgFuABYDt4vI4hHn1AIPABuMMUuAL1vH1wMbgGXAxcAa4EoRKQD+GbjWOr9ERK6N2l0ppdQ0kpJkY/ncvEAeoanLRXH29FpyCpGNENYCR40x9caYQeBZ4JYR59wNPGKM6QAwxrRYxw2QBqQAqUAy0AzUAEeMMa3Web8GPjWVG1FKqelsdZWDA2ecdLuGaOsZmHYrjCCygFAONAT93GgdC1YH1InIOyLynohsBDDGbAHeBM5aX68aYw4BR4GLRKRKRJKATwBzQ724iNwjIjtEZEdra2uoU5RSatpbXZWP22v49aFmvGb6rTCC6CWVk4Ba4CrgduAHIpInIguARUAFviByjYhcbo0k/gj4MbAZOAGM7g8LGGMeNcasNsasLioqitLlKqVUfK2sdCACL+09C0zPgJAUwTmnGf7pvcI6FqwR2GqMGQKOi8hhzgeI94wxPQAi8ivgMmCzMeZF4EXr+D2ECQhKKXUhyE1P5qLibDYdbgOmX5UyRDZC2A7Uiki1iKQAtwEvjDjneXxv/ohIIb4ppHrgFL4kcpKIJANXAoes8+ZY/zqAPwYem/LdKKXUNLa6ysGgxwtAce4MTCobY9zAl4BX8b2Z/8QYc0BEHhKRm63TXgXaReQgvpzB14wx7cBzwDFgH7AH2GONDAC+Y53/DvBNY8zhaN6YUkpNN2uq8gGw24TCzOkXECKZMsIY8zLw8ohjDwZ9b4D7rK/gczzAH4Z5ztsnerFKKTWTrbYCwpzsVGy26bN1pp9WKiulVJyU56VTmps2LRPKEOEIQSmlVHT81Y2LSEmanp/FNSAopVQcffySskRfQljTM0wppZSKOw0ISimlAA0ISimlLBoQlFJKARoQlFJKWTQgKKWUAjQgKKWUsmhAUEopBYD42hDNDCLSCpyc5MMLgbYoXs5Mofc9u8zW+4bZe++R3Pc8Y8y4G8rMqIAwFSKywxizOtHXEW9637PLbL1vmL33Hs371ikjpZRSgAYEpZRSltkUEB5N9AUkiN737DJb7xtm771H7b5nTQ5BKaXU2GbTCEEppdQYZkVAEJGNIvKhiBwVkfsTfT2xIiKPi0iLiOwPOpYvIq+LyBHrX0cirzEWRGSuiLwpIgdF5ICI/Ll1/IK+dxFJE5FtIrLHuu+vW
8erRWSr9ff+YxFJSfS1xoKI2EVkl4j80vr5gr9vETkhIvtEZLeI7LCORe3v/IIPCCJiBx4BbgAWA7eLyOLEXlXMPAFsHHHsfuANY0wt8Ib184XGDfx/xpjFwKXAn1j/H1/o9z4AXGOMuQRYDmwUkUuBbwHfNsYsADqALyTwGmPpz4FDQT/Plvu+2hizPGipadT+zi/4gACsBY4aY+qNMYPAs8AtCb6mmDDGbALOjTh8C/Ck9f2TwCfielFxYIw5a4x53/q+G9+bRDkX+L0bnx7rx2TrywDXAM9Zxy+4+wYQkQrgJuAx62dhFtx3GFH7O58NAaEcaAj6udE6NlsUG2POWt83AcWJvJhYE5EqYAWwlVlw79a0yW6gBXgdOAZ0GmPc1ikX6t/7vwJ/AXitnwuYHfdtgNdEZKeI3GMdi9rfue6pPIsYY4yIXLDLykQkC/gf4MvGGKfvQ6PPhXrvxhgPsFxE8oCfAwsTfEkxJyIfA1qMMTtF5KpEX0+cfcQYc1pE5gCvi8gHwb+c6t/5bBghnAbmBv1cYR2bLZpFpBTA+rclwdcTEyKSjC8YPGWM+Zl1eFbcO4AxphN4E7gMyBMR/4e9C/HvfQNws4icwDcFfA3wHS78+8YYc9r6twXfB4C1ROKV7FUAAAEpSURBVPHvfDYEhO1ArbUCIQW4DXghwdcUTy8Ad1rf3wn8IoHXEhPW/PEPgUPGmH8J+tUFfe8iUmSNDBCRdOB6fPmTN4FPW6ddcPdtjHnAGFNhjKnC99/zb4wxv88Fft8ikiki2f7vgY8C+4ni3/msKEwTkRvxzTnagceNMQ8n+JJiQkSeAa7C1/2wGfg74HngJ0Alvk6xnzXGjEw8z2gi8hFgM7CP83PKf4Uvj3DB3ruILMOXRLTj+3D3E2PMQyJSg++Tcz6wC7jDGDOQuCuNHWvK6KvGmI9d6Pdt3d/PrR+TgKeNMQ+LSAFR+jufFQFBKaXU+GbDlJFSSqkIaEBQSikFaEBQSill0YCglFIK0ICglFLKogFBKaUUoAFBKaWURQOCUkopAP4fHE6X1An7G6cAAAAASUVORK5CYII=\n", + "text/plain": [ + "
"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "\n",
+    "import matplotlib.pyplot as plt\n",
+    "import matplotlib.ticker as ticker\n",
+    "\n",
+    "plt.figure()\n",
+    "plt.plot(all_losses)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 235,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "(3m 30s) logloss=14.94 \t accuracy=0.57\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Check the accuracy on the test set. It turns out to be poor, which points to one thing: the data is unevenly sampled.\n",
+    "\n",
+    "out_t = cnn(Variable(te_x))\n",
+    "\n",
+    "# softmax computes the class probabilities; max then picks the largest pair: (probability, class index)\n",
+    "prediction_t = torch.max(F.softmax(out_t, dim=1), 1)[1]\n",
+    "pred_t_y = prediction_t.data.numpy().squeeze()\n",
+    "target_t_y = Variable(te_y).data.numpy()\n",
+    "logloss_t = log_loss(target_t_y, pred_t_y, eps=1e-15)\n",
+    "accuracy_t = sum(pred_t_y == target_t_y) / len(target_t_y)  # fraction of predictions that match the true labels\n",
+    "print('(%s) logloss=%.2f \\t accuracy=%.2f' % (timeSince(start), logloss_t, accuracy_t))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 236,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "(3m 49s) logloss=14.25 \t accuracy=0.59\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Now check the accuracy on the training set as well. This strongly suggests the data was sampled unevenly, so part of the data was missed and never learned.\n",
+    "\n",
+    "out_t = cnn(Variable(tr_x[:10000]))\n",
+    "\n",
+    "# softmax computes the class probabilities; max then picks the largest pair: (probability, class index)\n",
+    "prediction_t = torch.max(F.softmax(out_t, dim=1), 1)[1]\n",
+    "pred_t_y = prediction_t.data.numpy().squeeze()\n",
+    "target_t_y = Variable(tr_y[:10000]).data.numpy()\n",
+    "logloss_t = log_loss(target_t_y, pred_t_y, eps=1e-15)\n",
+    "accuracy_t = sum(pred_t_y == target_t_y) / len(target_t_y)  # fraction of predictions that match the true labels\n",
+    "print('(%s) logloss=%.2f \\t accuracy=%.2f' % (timeSince(start), logloss_t, accuracy_t))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   
"outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "* * * \n",
+    "\n",
+    "Other information"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Load pre-trained word vectors\n",
+    "# (newer gensim versions expose this loader as KeyedVectors.load_word2vec_format)\n",
+    "\n",
+    "from gensim.models.word2vec import Word2Vec\n",
+    "\n",
+    "model = Word2Vec.load_word2vec_format(\"vector.txt\", binary=False)  # C text format\n",
+    "# model = Word2Vec.load_word2vec_format(\"vector.bin\", binary=True)  # C binary format"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Load Google's pre-trained word vectors and explore the relations between words\n",
+    "\n",
+    "from gensim.models.word2vec import Word2Vec\n",
+    "model = Word2Vec.load_word2vec_format(\"GoogleNews-vectors-negative300.bin\", binary=True)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sanity-check the analogy predictions\n",
+    "\n",
+    "print(model.most_similar(positive=[\"woman\", \"king\"], negative=[\"man\"], topn=5))\n",
+    "print(model.most_similar(positive=[\"biggest\", \"small\"], negative=[\"big\"], topn=5))\n",
+    "print(model.most_similar(positive=[\"ate\", \"speak\"], negative=[\"eat\"], topn=5))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "\n",
+    "with open(\"food_words.txt\", \"r\") as infile:\n",
+    "    food_words = infile.readlines()\n",
+    "\n",
+    "with open(\"sports_words.txt\", \"r\") as infile:\n",
+    "    sports_words = infile.readlines()\n",
+    "\n",
+    "with open(\"weather_words.txt\", \"r\") as infile:\n",
+    "    weather_words = infile.readlines()\n",
+    "\n",
+    "def getWordVecs(words):\n",
+    "    vecs = []\n",
+    "    for word in words:\n",
+    "        word = word.replace(\"\\n\", \"\")\n",
+    "        try:\n",
+    "            vecs.append(model[word].reshape((1, 300)))\n",
+    "        except KeyError:\n",
+    "            continue\n",
+    "\n",
+    "    # numpy provides numpy.concatenate((a1, a2, ...), axis=0), which joins several arrays in one call\n",
+    "    \"\"\"\n",
+    "    
>>> a = np.array([1, 2, 3])\n",
+    "    >>> b = np.array([11, 22, 33])\n",
+    "    >>> c = np.array([44, 55, 66])\n",
+    "    >>> np.concatenate((a, b, c), axis=0)  # axis=0 is the default and may be omitted\n",
+    "    array([ 1,  2,  3, 11, 22, 33, 44, 55, 66])  # for 1-D arrays the axis value does not change the result\n",
+    "    \"\"\"\n",
+    "    vecs = np.concatenate(vecs)\n",
+    "    return np.array(vecs, dtype=\"float\")\n",
+    "\n",
+    "food_vecs = getWordVecs(food_words)\n",
+    "sports_vecs = getWordVecs(sports_words)\n",
+    "weather_vecs = getWordVecs(weather_words)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Visualize the three word groups with TSNE and matplotlib\n",
+    "\n",
+    "from sklearn.manifold import TSNE\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "ts = TSNE(2)\n",
+    "reduced_vecs = ts.fit_transform(np.concatenate((food_vecs, sports_vecs, weather_vecs)))\n",
+    "\n",
+    "for i in range(len(reduced_vecs)):\n",
+    "    if i < len(food_vecs):\n",
+    "        color = \"b\"\n",
+    "    elif i >= len(food_vecs) and i < (len(food_vecs) + len(sports_vecs)):\n",
+    "        color = \"r\"\n",
+    "    else:\n",
+    "        color = \"g\"\n",
+    "\n",
+    "    plt.plot(reduced_vecs[i, 0], reduced_vecs[i, 1], marker=\"o\", color=color, markersize=8)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# First, import the data and build the Word2Vec model:\n",
+    "\n",
+    "from sklearn.cross_validation import train_test_split  # (moved to sklearn.model_selection in newer scikit-learn)\n",
+    "from gensim.models.word2vec import Word2Vec\n",
+    "\n",
+    "with open('twitter_data/pos_tweets.txt', 'r') as infile:\n",
+    "    pos_tweets = infile.readlines()\n",
+    "\n",
+    "with open('twitter_data/neg_tweets.txt', 'r') as infile:\n",
+    "    neg_tweets = infile.readlines()\n",
+    "\n",
+    "# use 1 for positive sentiment, 0 for negative\n",
+    "y = np.concatenate((np.ones(len(pos_tweets)), np.zeros(len(neg_tweets))))\n",
+    "\n",
+    "x_train, x_test, y_train, y_test = 
train_test_split(np.concatenate((pos_tweets, neg_tweets)), y, test_size=0.2)\n",
+    "\n",
+    "# Do some very minor text preprocessing\n",
+    "def cleanText(corpus):\n",
+    "    corpus = [z.lower().replace('\\n', '').split() for z in corpus]\n",
+    "    return corpus\n",
+    "\n",
+    "x_train = cleanText(x_train)\n",
+    "x_test = cleanText(x_test)\n",
+    "\n",
+    "n_dim = 300\n",
+    "# Initialize model and build vocab\n",
+    "imdb_w2v = Word2Vec(size=n_dim, min_count=10)\n",
+    "imdb_w2v.build_vocab(x_train)\n",
+    "# Train the model over train_reviews (this may take several minutes)\n",
+    "# (newer gensim versions also require total_examples and epochs arguments here)\n",
+    "imdb_w2v.train(x_train)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Next, build a vector for each input text by averaging the vectors of all the words in the tweet:\n",
+    "\n",
+    "def buildWordVector(text, size):\n",
+    "    vec = np.zeros(size).reshape((1, size))\n",
+    "    count = 0.\n",
+    "\n",
+    "    for word in text:\n",
+    "        try:\n",
+    "            vec += imdb_w2v[word].reshape((1, size))\n",
+    "            count += 1.\n",
+    "        except KeyError:\n",
+    "            continue\n",
+    "    if count != 0:\n",
+    "        vec /= count\n",
+    "    return vec"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Scaling is part of standardizing the data: we usually transform the dataset toward a zero-mean Gaussian distribution, so values above the mean lean positive and values below it lean negative. Many machine learning models require the features to be scaled in advance to work well, especially models with many variables such as text classifiers.\n",
+    "\n",
+    "from sklearn.preprocessing import scale\n",
+    "\n",
+    "train_vecs = np.concatenate([buildWordVector(z, n_dim) for z in x_train])\n",
+    "train_vecs = scale(train_vecs)\n",
+    "\n",
+    "# Train word2vec on test tweets\n",
+    "imdb_w2v.train(x_test)\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Finally, build the test-set vectors and scale them:\n",
+    "\n",
+    "# Build test tweet vectors then scale\n",
+    "test_vecs = np.concatenate([buildWordVector(z, n_dim) for z in x_test])\n",
+    "test_vecs = scale(test_vecs)\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "\"\"\"\n",
+    "Next we want to validate the classifier by computing prediction accuracy on the test set and plotting an ROC curve. An ROC curve measures how the true-positive and false-positive rates change as a model parameter is varied; in our case that parameter is the probability cutoff of the classifier. In general, the larger the area under the ROC curve (AUC), the better the model performs. You can find more about ROC curves here:\n",
+    "\n",
+    "(https://en.wikipedia.org/wiki/Receiver_operating_characteristic)\n",
+    "\n",
+    "In this case we use logistic regression trained by stochastic gradient descent as the classifier.\n",
+    "\"\"\"\n",
+    "\n",
+    "# Use classification algorithm (i.e. stochastic logistic regression) on training set, then assess model performance on test set\n",
+    "\n",
+    "from sklearn.linear_model import SGDClassifier\n",
+    "lr = SGDClassifier(loss='log', penalty='l1')\n",
+    "lr.fit(train_vecs, y_train)\n",
+    "print('Test Accuracy: %.2f' % lr.score(test_vecs, y_test))\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Then build the ROC curve with matplotlib and the metrics module\n",
+    "\n",
+    "# Create ROC curve\n",
+    "from sklearn.metrics import roc_curve, auc\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "pred_probas = lr.predict_proba(test_vecs)[:, 1]\n",
+    "\n",
+    "fpr, tpr, _ = roc_curve(y_test, pred_probas)\n",
+    "roc_auc = auc(fpr, tpr)\n",
+    "\n",
+    "plt.plot(fpr, tpr, label='area = %.2f' % roc_auc)\n",
+    "plt.plot([0, 1], [0, 1], 'k--')\n",
+    "plt.xlim([0.0, 1.
0])\n",
+    "plt.ylim([0.0, 1.05])\n",
+    "plt.legend(loc='lower right')\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.3"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/src/python/getting-started/digit-recognizer/cnn_pytorch-python3.6.py b/src/python/getting-started/digit-recognizer/cnn_pytorch-python3.6.py
index f3278fd..3f7ed41 100644
--- a/src/python/getting-started/digit-recognizer/cnn_pytorch-python3.6.py
+++ b/src/python/getting-started/digit-recognizer/cnn_pytorch-python3.6.py
@@ -20,7 +20,7 @@ from torch.utils.data import Dataset, DataLoader
 import os.path
 
 # 数据路径
-data_dir = '/media/wsw/B634091A3408DF6D/data/kaggle/datasets/getting-started/digit-recognizer/'
+data_dir = '/opt/data/kaggle/getting-started/digit-recognizer/'
 
 class CustomedDataSet(Dataset):
     def __init__(self, train=True):