
scikit-learn in Python (from zouxy09's column)

Published: 2016-09-16 13:05



Practicing with scikit-learn, Python's Machine Learning Library

zouxy09@qq.com

 

1. Overview

Fueled by the big-data craze of recent years, machine learning algorithms have become "household names": even people who know nothing about the underlying theory can rattle off one or two famous algorithm names. Of course, although the forest of algorithms is large, the truly capable ones are few. Algorithms that adapt well to a given setting and deliver good results stand out, while the mediocre fade into history. As the machine learning community has grown and practice has vetted these standouts, they have earned recognition and favor, along with more community effort to support, improve, and promote them.

Take classification, the most widely used task, as an example. Classifiers fall roughly into two camps: linear and nonlinear. The linear camp includes well-known algorithms such as logistic regression, naive Bayes, and maximum entropy; the nonlinear camp includes random forests, decision trees, neural networks, kernel machines, and so on. The banner linear algorithms fly is efficiency: training and prediction are both fast. But their final quality depends heavily on the features, since the data must be (roughly) linearly separable in feature space. Using a linear algorithm therefore means investing in feature engineering: selecting, transforming, and combining features until they become discriminative. Nonlinear algorithms are more powerful in this respect: they can model complex decision surfaces and fit the data more closely. The sketch below makes the feature-engineering point concrete.
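Here is a minimal sketch (my addition, not from the original post). make_circles generates two concentric classes that no straight line can separate; after a simple squared-feature transform, a plain logistic regression handles them almost perfectly:

from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

# Two concentric rings: not linearly separable in the raw 2-D features.
x, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)

raw = LogisticRegression().fit(x, y)
print 'raw features: %.2f' % raw.score(x, y)        # roughly chance level

# Adding degree-2 terms (x1^2, x2^2, x1*x2) makes the classes separable.
x2 = PolynomialFeatures(degree=2).fit_transform(x)
poly = LogisticRegression().fit(x2, y)
print 'squared features: %.2f' % poly.score(x2, y)  # close to 1.0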

Given a fixed set of features, which machine learning algorithm will do best? Nobody knows in advance; trying them is the only reliable test. Does that mean grinding out five or six implementations by hand? No. The machine learning community is a powerful force, and the coder's creed is: don't reinvent the wheel. For any reasonably mature algorithm there is an excellent library you can use directly, saving most of the evaluation time.

Since Python is what I use most these days, and the most famous machine learning library in the Python world is without doubt scikit-learn, that is the obvious choice. The library has many strengths: it is simple to use, its interface is abstracted remarkably well, and the documentation is genuinely impressive. In this post we wrap many of its machine learning algorithms behind a common interface and test them all in one go, which makes it easy to compare them and pick the best. Of course, for any specific algorithm, hyperparameter tuning also matters a great deal.
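The uniform interface is worth a quick illustration (a minimal sketch of my own, not from the original post): whichever model you pick, you train with fit() and apply with predict() or score().

from sklearn.datasets import load_digits
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

digits = load_digits()  # small 8x8 digit dataset bundled with scikit-learn
for name, model in [('NB', MultinomialNB()), ('DT', DecisionTreeClassifier())]:
    model.fit(digits.data, digits.target)                 # same call for every estimator
    print name, model.score(digits.data, digits.target)   # so is scoring/prediction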

 

2. Practicing with scikit-learn in Python

2.1 Preparing Python

One of Python's most appreciated traits is its community support: there are a great many excellent libraries and modules. But some libraries depend on others, so installing them by hand can be a tedious process. Fortunately, people who could not stand the tedium have built automation tools to save everyone time. In my experience, there are three ways to install a Python library:

1) Anaconda

This is a very complete Python distribution. The latest release bundles as many as 195 popular Python packages, including the scientific-computing staples we use all the time, such as numpy and scipy. With it, you never again have to install one dependency after another with your hair on fire. Anaconda in hand, life is easy. Download it from the official Anaconda site.

2) Pip

Anyone who has used Ubuntu knows the love for apt-get. Python libraries can be downloaded and installed in much the same way with the pip tool: name the library you need and pip handles download and installation in one step. Get pip from its page at https://pypi.python.org/pypi/pip; from then on, everything you need is one "pip install xx" away.

3) Source packages

If neither of the methods above can find your library, download its source code directly and unpack it; the directory will contain a setup.py file. Run "python setup.py install" to install the library into Python's default package directory.

2.2 Testing scikit-learn

scikit-learn is already included in Anaconda; you can also download the source package from the official site and install it yourself. The code in this post wraps the following machine learning algorithms, so by changing only the data-loading function you can test them all in a single run (a sketch of swapping in a different loader follows the script):

classifiers = {'NB': naive_bayes_classifier,
               'KNN': knn_classifier,
               'LR': logistic_regression_classifier,
               'RF': random_forest_classifier,
               'DT': decision_tree_classifier,
               'SVM': svm_classifier,
               'SVMCV': svm_cross_validation,
               'GBDT': gradient_boosting_classifier}

train_test.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import os
import time
from sklearn import metrics
import numpy as np
import cPickle as pickle

reload(sys)
sys.setdefaultencoding('utf8')


# Multinomial Naive Bayes Classifier
def naive_bayes_classifier(train_x, train_y):
    from sklearn.naive_bayes import MultinomialNB
    model = MultinomialNB(alpha=0.01)
    model.fit(train_x, train_y)
    return model


# KNN Classifier
def knn_classifier(train_x, train_y):
    from sklearn.neighbors import KNeighborsClassifier
    model = KNeighborsClassifier()
    model.fit(train_x, train_y)
    return model


# Logistic Regression Classifier
def logistic_regression_classifier(train_x, train_y):
    from sklearn.linear_model import LogisticRegression
    model = LogisticRegression(penalty='l2')
    model.fit(train_x, train_y)
    return model


# Random Forest Classifier
def random_forest_classifier(train_x, train_y):
    from sklearn.ensemble import RandomForestClassifier
    model = RandomForestClassifier(n_estimators=8)
    model.fit(train_x, train_y)
    return model


# Decision Tree Classifier
def decision_tree_classifier(train_x, train_y):
    from sklearn import tree
    model = tree.DecisionTreeClassifier()
    model.fit(train_x, train_y)
    return model


# GBDT (Gradient Boosting Decision Tree) Classifier
def gradient_boosting_classifier(train_x, train_y):
    from sklearn.ensemble import GradientBoostingClassifier
    model = GradientBoostingClassifier(n_estimators=200)
    model.fit(train_x, train_y)
    return model


# SVM Classifier
def svm_classifier(train_x, train_y):
    from sklearn.svm import SVC
    model = SVC(kernel='rbf', probability=True)
    model.fit(train_x, train_y)
    return model


# SVM Classifier using cross validation
def svm_cross_validation(train_x, train_y):
    from sklearn.grid_search import GridSearchCV
    from sklearn.svm import SVC
    model = SVC(kernel='rbf', probability=True)
    param_grid = {'C': [1e-3, 1e-2, 1e-1, 1, 10, 100, 1000],
                  'gamma': [0.001, 0.0001]}
    grid_search = GridSearchCV(model, param_grid, n_jobs=1, verbose=1)
    grid_search.fit(train_x, train_y)
    best_parameters = grid_search.best_estimator_.get_params()
    for para, val in best_parameters.items():
        print para, val
    model = SVC(kernel='rbf', C=best_parameters['C'],
                gamma=best_parameters['gamma'], probability=True)
    model.fit(train_x, train_y)
    return model


def read_data(data_file):
    import gzip
    f = gzip.open(data_file, "rb")
    train, val, test = pickle.load(f)
    f.close()
    train_x = train[0]
    train_y = train[1]
    test_x = test[0]
    test_y = test[1]
    return train_x, train_y, test_x, test_y


if __name__ == '__main__':
    data_file = "mnist.pkl.gz"
    thresh = 0.5
    model_save_file = None
    model_save = {}

    test_classifiers = ['NB', 'KNN', 'LR', 'RF', 'DT', 'SVM', 'GBDT']
    classifiers = {'NB': naive_bayes_classifier,
                   'KNN': knn_classifier,
                   'LR': logistic_regression_classifier,
                   'RF': random_forest_classifier,
                   'DT': decision_tree_classifier,
                   'SVM': svm_classifier,
                   'SVMCV': svm_cross_validation,
                   'GBDT': gradient_boosting_classifier}

    print 'reading training and testing data...'
    train_x, train_y, test_x, test_y = read_data(data_file)
    num_train, num_feat = train_x.shape
    num_test, num_feat = test_x.shape
    is_binary_class = (len(np.unique(train_y)) == 2)
    print '******************** Data Info *********************'
    print '#training data: %d, #testing_data: %d, dimension: %d' % (num_train, num_test, num_feat)

    for classifier in test_classifiers:
        print '******************* %s ********************' % classifier
        start_time = time.time()
        model = classifiers[classifier](train_x, train_y)
        print 'training took %fs!' % (time.time() - start_time)
        predict = model.predict(test_x)
        if model_save_file != None:
            model_save[classifier] = model
        if is_binary_class:
            precision = metrics.precision_score(test_y, predict)
            recall = metrics.recall_score(test_y, predict)
            print 'precision: %.2f%%, recall: %.2f%%' % (100 * precision, 100 * recall)
        accuracy = metrics.accuracy_score(test_y, predict)
        print 'accuracy: %.2f%%' % (100 * accuracy)

    if model_save_file != None:
        pickle.dump(model_save, open(model_save_file, 'wb'))
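To run the same benchmark on a different dataset, only read_data needs to change. As a sketch (my addition, not part of the original script), here is a drop-in replacement that ignores the pickle file and uses the small digit set bundled with scikit-learn; note that sklearn.cross_validation matches the old API used by the script above and was later renamed sklearn.model_selection:

def read_data(data_file):
    # Drop-in replacement: ignore data_file and load scikit-learn's
    # bundled 8x8 digit images instead of mnist.pkl.gz.
    from sklearn.datasets import load_digits
    from sklearn.cross_validation import train_test_split
    digits = load_digits()
    train_x, test_x, train_y, test_y = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0)
    return train_x, train_y, test_x, test_y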

3. Test Results

This experiment uses the MNIST handwritten digit dataset, with 50,000 training samples and 10,000 test samples.

The run produced the following output:

reading training and testing data...
******************** Data Info *********************
#training data: 50000, #testing_data: 10000, dimension: 784
******************* NB ********************
training took 0.287000s!
accuracy: 83.69%
******************* KNN ********************
training took 31.991000s!
accuracy: 96.64%
******************* LR ********************
training took 101.282000s!
accuracy: 91.99%
******************* RF ********************
training took 5.442000s!
accuracy: 93.78%
******************* DT ********************
training took 28.326000s!
accuracy: 87.23%
******************* SVM ********************
training took 3152.369000s!
accuracy: 94.35%
******************* GBDT ********************
training took 7623.761000s!
accuracy: 96.18%

On this dataset the classes form fairly tight clusters (if you know the dataset, its t-SNE projection makes this obvious; the task is so easy that the deep learning community treats it as a toy dataset), so KNN performs quite well. GBDT is an excellent algorithm: in Kaggle and other competitions it regularly appears among the top finishers. The proverb that three cobblers together beat one Zhuge Liang holds up, especially when the three cobblers have complementary strengths!

Another approach that is very effective in practice is to fuse these classifiers and let them decide together. Even something as simple as majority voting tends to work very well, and I recommend trying it in practice.
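As a sketch of that idea (my addition; it assumes scikit-learn 0.17 or newer, where VotingClassifier was introduced, and reuses train_x, train_y, test_x, test_y from train_test.py above):

from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

# Majority vote over three complementary classifiers; 'hard' voting
# picks whichever label the most estimators predict.
voter = VotingClassifier(estimators=[('lr', LogisticRegression(penalty='l2')),
                                     ('rf', RandomForestClassifier(n_estimators=8)),
                                     ('nb', MultinomialNB(alpha=0.01))],
                         voting='hard')
voter.fit(train_x, train_y)
predict = voter.predict(test_x)
print 'voting accuracy: %.2f%%' % (100 * metrics.accuracy_score(test_y, predict))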

 

