Text classification with sklearn (feature extraction, Naive Bayes/KNN/SVM, clustering)

The workflow breaks down into the following steps:

- Load the dataset
- Extract features
- Classify: Naive Bayes, KNN, SVM
- Cluster

The 20newsgroups homepage (/~jason/20Newsgroups/) offers three versions of the dataset; here we use the most original one, 20news-19997.tar.gz (/~jason/20Newsgroups/20news-19997.tar.gz).

1. Load the dataset

Download 20news-19997.tar.gz, extract it into the scikit_learn_data folder, and load the data; see the code comments for details.
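As a side note, if you keep the data somewhere other than the default cache, fetch_20newsgroups accepts a data_home argument pointing at that folder. A minimal sketch, with an illustrative path that is not from the original post:

```python
from sklearn.datasets import fetch_20newsgroups

# data_home overrides the default cache location (~/scikit_learn_data);
# if the data is missing there, fetch_20newsgroups downloads it
newsgroup_train = fetch_20newsgroups(
    data_home='./scikit_learn_data',  # hypothetical local cache directory
    subset='train')
print len(newsgroup_train.data)  # number of training documents
```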
```python
# first extract the 20 news_group dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups
# all categories
# newsgroup_train = fetch_20newsgroups(subset='train')
# part categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset='train', categories=categories)
```

We can check whether it loaded correctly:

```python
# print category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))
```

Result:

```
['comp.graphics',
 'comp.os.ms-windows.misc',
 'comp.sys.ibm.pc.hardware',
 'comp.sys.mac.hardware',
 'comp.windows.x']
```

2. Extract features

The newsgroup_train we just loaded is simply a collection of documents; we need to extract features from them, such as term frequencies, using fit_transform.

Method 1. HashingVectorizer, with a fixed number of features

```python
# newsgroup_train.data is the original documents, but we need to extract the
# feature vectors in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer

# the test set has to be loaded before it can be vectorized
# (the original post only fetched it later, in section 3.1)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)

# note: non_negative was removed in later sklearn versions
# (use alternate_sign=False instead)
vectorizer = HashingVectorizer(stop_words='english', non_negative=True,
                               n_features=10000)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
fea_test = vectorizer.fit_transform(newsgroups_test.data)
# return feature vector 'fea_train' [n_samples, n_features]
print 'Size of fea_train:' + repr(fea_train.shape)
print 'Size of fea_test:' + repr(fea_test.shape)
# 11314 documents, 130107 vectors for all categories
print 'The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100)
```

Result:

```
Size of fea_train:(2936, 10000)
Size of fea_test:(1955, 10000)
The average feature sparsity is 1.002%
```

Because we kept only 10,000 terms, i.e. 10,000 feature dimensions, the matrix is not yet all that sparse.
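A quick aside on why the hashing trick sidesteps the train/test dimension problem discussed below: HashingVectorizer is stateless, so it never learns a vocabulary; terms are mapped to columns by a hash function, and any input lands in the same fixed n_features space. A minimal sketch, with toy strings invented for illustration:

```python
# HashingVectorizer needs no fit: any corpus is hashed into the
# same 10000-dimensional space defined by n_features
docs_a = ['graphics card drivers', 'windows x server']  # toy documents
docs_b = ['mac hardware question']                      # toy document
print vectorizer.transform(docs_a).shape  # (2, 10000)
print vectorizer.transform(docs_b).shape  # (1, 10000)
```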
In practice, a TfidfVectorizer tally yields features in the tens of thousands of dimensions; over the full sample set I counted 130,000+ dimensions, which makes for a very sparse matrix.
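To see the problem the next section solves, here is a minimal sketch (not from the original post) of what goes wrong when train and test are vectorized independently: each TfidfVectorizer learns its own vocabulary, so both the column counts and the meaning of each column differ between the two matrices.

```python
# a minimal sketch of the train/test dimension mismatch (illustrative only)
from sklearn.feature_extraction.text import TfidfVectorizer

v_train = TfidfVectorizer(stop_words='english')
v_test = TfidfVectorizer(stop_words='english')
X_train = v_train.fit_transform(newsgroup_train.data)
X_test = v_test.fit_transform(newsgroups_test.data)
# different vocabularies -> different (and incompatible) feature spaces
print X_train.shape, X_test.shape
print len(v_train.vocabulary_), len(v_test.vocabulary_)
```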
As the code comment above notes, the TF-IDF features extracted on the train and test sets have different dimensions. How do we make them the same? There are two methods:

Method 2. CountVectorizer + TfidfTransformer

Have the two CountVectorizers share a vocabulary:

```python
# ----------------------------------------------------
# method 2: CountVectorizer + TfidfTransformer
print '*************************\nCountVectorizer+TfidfTransformer\n*************************'
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

count_v1 = CountVectorizer(stop_words='english', max_df=0.5)
counts_train = count_v1.fit_transform(newsgroup_train.data)
print "the shape of train is " + repr(counts_train.shape)

# reuse the training vocabulary so the test matrix gets the same columns
count_v2 = CountVectorizer(vocabulary=count_v1.vocabulary_)
counts_test = count_v2.fit_transform(newsgroups_test.data)
print "the shape of test is " + repr(counts_test.shape)

tfidftransformer = TfidfTransformer()
tfidf_train = tfidftransformer.fit(counts_train).transform(counts_train)
# IDF weights should be learned from the training set, so only transform here
# (the original code re-fit the transformer on the test counts)
tfidf_test = tfidftransformer.transform(counts_test)
```

Result:

```
*************************
CountVectorizer+TfidfTransformer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
```

Method 3. TfidfVectorizer

Have the two TfidfVectorizers share a vocabulary:

```python
# method 3: TfidfVectorizer
print '*************************\nTfidfVectorizer\n*************************'
from sklearn.feature_extraction.text import TfidfVectorizer

tv = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
tfidf_train_2 = tv.fit_transform(newsgroup_train.data)
tv2 = TfidfVectorizer(vocabulary=tv.vocabulary_)
tfidf_test_2 = tv2.fit_transform(newsgroups_test.data)
print "the shape of train is " + repr(tfidf_train_2.shape)
print "the shape of test is " + repr(tfidf_test_2.shape)
analyze = tv.build_analyzer()
tv.get_feature_names()  # statistical features/terms
```

Result:

```
*************************
TfidfVectorizer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
```

In addition, sklearn ships a ready-made feature-fetching function, fetch_20newsgroups_vectorized.

Method 4. fetch_20newsgroups_vectorized

This method cannot pick out the features of just a few categories; it returns the features of all 20 categories:

```python
print '*************************\nfetch_20newsgroups_vectorized\n*************************'
from sklearn.datasets import fetch_20newsgroups_vectorized

tfidf_train_3 = fetch_20newsgroups_vectorized(subset='train')
tfidf_test_3 = fetch_20newsgroups_vectorized(subset='test')
print "the shape of train is " + repr(tfidf_train_3.data.shape)
print "the shape of test is " + repr(tfidf_test_3.data.shape)
```

Result:

```
*************************
fetch_20newsgroups_vectorized
*************************
the shape of train is (11314, 130107)
the shape of test is (7532, 130107)
```
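A simpler equivalent to building a second vectorizer with a shared vocabulary is worth a quick sketch (reusing the tv object fitted above): a fitted TfidfVectorizer can transform the test documents directly, which also reuses the IDF weights learned on the training set. The shared-vocabulary version above recomputes IDF on the test set, so the values differ slightly even though the shapes match.

```python
# reuse the vectorizer fitted on the training set instead of building tv2
tfidf_test_2b = tv.transform(newsgroups_test.data)
print "the shape of test is " + repr(tfidf_test_2b.shape)  # (1955, 66433)
```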
3. Classification

3.1 Multinomial Naive Bayes Classifier

```python
######################################################
# Multinomial Naive Bayes Classifier
print '*************************\nNaive Bayes\n*************************'
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

# newsgroups_test and fea_test were already built in section 2
# create the Multinomial Naive Bayes Classifier
clf = MultinomialNB(alpha=0.01)
clf.fit(fea_train, newsgroup_train.target)
pred = clf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
# notice here we can see that f1_score is not equal to
# 2*precision*recall/(precision+recall)
# because the m_precision and m_recall we get are averaged; however,
# metrics.f1_score() calculates a weighted average, i.e., it takes the
# number of samples in each class into consideration
```

Note the last few comment lines: why is f1 ≠ 2 * (precision * recall) / (precision + recall)? See the worked sketch at the end of this post.

The helper function calculate_result computes precision, recall and f1:

```python
def calculate_result(actual, pred):
    # note: older sklearn versions default to average='weighted' for
    # multiclass targets; newer versions require passing it explicitly
    m_precision = metrics.precision_score(actual, pred)
    m_recall = metrics.recall_score(actual, pred)
    print 'predict info:'
    print 'precision:{0:.3f}'.format(m_precision)
    print 'recall:{0:0.3f}'.format(m_recall)
    print 'f1-score:{0:.3f}'.format(metrics.f1_score(actual, pred))
```

3.2 KNN:

```python
######################################################
# KNN Classifier
from sklearn.neighbors import KNeighborsClassifier
print '*************************\nKNN\n*************************'
knnclf = KNeighborsClassifier()  # default with k=5
knnclf.fit(fea_train, newsgroup_train.target)
pred = knnclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
```

3.3 SVM:

```python
######################################################
# SVM Classifier
from sklearn.svm import SVC
print '*************************\nSVM\n*************************'
svclf = SVC(kernel='linear')  # default kernel is 'rbf'
svclf.fit(fea_train, newsgroup_train.target)
pred = svclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
```

Result:

```
*************************
Naive Bayes
*************************
predict info:
precision:0.764
recall:0.759
f1-score:0.760
*************************
KNN
*************************
predict info:
precision:0.642
recall:0.635
f1-score:0.636
*************************
SVM
*************************
predict info:
precision:0.777
recall:0.774
f1-score:0.774
```

4. Clustering

```python
######################################################
# KMeans Cluster
from sklearn.cluster import KMeans
print '*************************\nKMeans\n*************************'
pred = KMeans(n_clusters=5)
pred.fit(fea_test)
calculate_result(newsgroups_test.target, pred.labels_)
```

Result:

```
*************************
KMeans
*************************
predict info:
precision:0.264
recall:0.226
f1-score:0.213
```

Note that KMeans cluster ids are arbitrary, so scoring them directly against the true labels with precision/recall understates the clustering quality; the numbers above are only a rough sanity check.
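Finally, as promised in section 3.1, a small worked sketch (with toy labels invented for illustration, not from the 20newsgroups run) showing why the weighted-average f1 reported by metrics.f1_score is not the harmonic mean of the weighted-average precision and recall: the harmonic mean is taken per class before the support-weighted averaging, not after.

```python
from sklearn import metrics

# toy 2-class example (invented labels)
actual = [0, 0, 0, 0, 1, 1]
pred   = [0, 1, 1, 1, 1, 1]

# per class: class 0 has P=1.0, R=0.25, f1=0.400 (support 4)
#            class 1 has P=0.4, R=1.00, f1=0.571 (support 2)
p = metrics.precision_score(actual, pred, average='weighted')  # 0.800
r = metrics.recall_score(actual, pred, average='weighted')     # 0.500
print 'harmonic mean of averages: {0:.3f}'.format(2 * p * r / (p + r))  # 0.615
print 'weighted-average f1:       {0:.3f}'.format(
    metrics.f1_score(actual, pred, average='weighted'))                 # 0.457
```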