Cross validation: the repeated process of splitting off a validation set and evaluating the model.
Train set : Validation set : Test set = 6 : 2 : 2 (generally)
The test set is not used in the model training process.
In Kaggle competitions, the test set is given separately.
Purpose: to make a good model.
A good model doesn't mean a high-performance model.
A good model means a low-error, stable model.
Because cross validation takes a long time, it is most useful when there is not much data.
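To make the idea concrete, here is a minimal NumPy sketch of 3-fold splitting on toy data (illustrative only; scikit-learn's cross_validate used below handles this automatically):

```python
import numpy as np

X = np.arange(12).reshape(6, 2)               # 6 toy samples
folds = np.array_split(np.arange(len(X)), 3)  # 3 folds of row indices
for i, val_idx in enumerate(folds):
    # fold i is the validation set; the remaining folds form the training set
    train_idx = np.hstack([f for j, f in enumerate(folds) if j != i])
    print(f"fold {i}: train={train_idx}, val={val_idx}")
```

Each fold serves as the validation set exactly once, and the scores are averaged.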
Prepare data

```python
import pandas as pd

wine = pd.read_csv('https://bit.ly/wine_csv_data')
data = wine[['alcohol', 'sugar', 'pH']].to_numpy()
target = wine['class'].to_numpy()  # 1D target array
```
```python
from sklearn.model_selection import train_test_split

# hold out 20% of the data as the test set
train_input, test_input, train_target, test_target = train_test_split(
    data, target, test_size=0.2, random_state=42)
# split 20% of the remaining training data off as the validation set
sub_input, val_input, sub_target, val_target = train_test_split(
    train_input, train_target, test_size=0.2, random_state=42)
```
```python
sub_input.shape, val_input.shape, test_input.shape
```
((4157, 3), (1040, 3), (1300, 3))
Create model

```python
from sklearn.tree import DecisionTreeClassifier

dt = DecisionTreeClassifier(random_state=42)
dt.fit(sub_input, sub_target)
print(dt.score(sub_input, sub_target))
print(dt.score(val_input, val_target))
```
0.9971133028626413
0.864423076923077

The training score is much higher than the validation score, so the model is overfitting.
Validate model

```python
from sklearn.model_selection import cross_validate

# 5-fold cross validation by default
scores = cross_validate(dt, train_input, train_target)
for item in scores.items():
    print(item)
```
('fit_time', array([0.01251197, 0.00755358, 0.0074594 , 0.00742102, 0.00734329]))
('score_time', array([0.00133634, 0.00079608, 0.0007925 , 0.00083232, 0.00076413]))
('test_score', array([0.86923077, 0.84615385, 0.87680462, 0.84889317, 0.83541867]))
```python
import numpy as np

print(np.mean(scores['test_score']))
```
0.855300214703487
cross_validate does not shuffle the training set by itself; to shuffle it during cross validation, a splitter must be specified.
Regression model > KFold
Classification model > StratifiedKFold
```python
from sklearn.model_selection import StratifiedKFold

splitter = StratifiedKFold(shuffle=True, random_state=42)
scores = cross_validate(dt, train_input, train_target, cv=splitter)
print(np.mean(scores['test_score']))
```
0.8539548012141852
```python
# 10 folds instead of the default 5
splitter = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_validate(dt, train_input, train_target, cv=splitter)
print(np.mean(scores['test_score']))
```
0.8574181117533719
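For a regression model, KFold plays the same role. A minimal sketch of the API on the same data (with a classification target, StratifiedKFold above remains the better choice):

```python
from sklearn.model_selection import KFold

# KFold ignores class proportions; shown only to illustrate the splitter API
splitter = KFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_validate(dt, train_input, train_target, cv=splitter)
print(np.mean(scores['test_score']))
```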
Hyperparameter Tuning
ex) max_depth=3 → accuracy 0.84
Tuning means finding the best values by adjusting multiple parameters simultaneously.
AutoML: technology that performs hyperparameter tuning automatically, without human intervention.
Two common approaches: Grid Search and Random Search.
Grid Search
Performs hyperparameter tuning and cross-validation simultaneously.
Finds the optimal hyperparameters by evaluating every combination of predetermined values.
```python
%%time
from sklearn.model_selection import GridSearchCV

params = {
    'min_impurity_decrease': [0.0001, 0.0002, 0.0003, 0.0004, 0.0005]
}
gs = GridSearchCV(DecisionTreeClassifier(random_state=42), params, n_jobs=-1)
gs.fit(train_input, train_target)
```
CPU times: user 70.1 ms, sys: 6.06 ms, total: 76.1 ms
Wall time: 183 ms
```python
dt = gs.best_estimator_
print(dt)
print(dt.score(train_input, train_target))
```
DecisionTreeClassifier(min_impurity_decrease=0.0001, random_state=42)
0.9615162593804117
```python
print(gs.cv_results_['mean_test_score'])
print(gs.best_params_)
```
[0.86819297 0.86453617 0.86492226 0.86780891 0.86761605]
{'min_impurity_decrease': 0.0001}
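best_params_ can also be recovered by hand from cv_results_; a quick sketch:

```python
# index of the parameter combination with the best mean validation score
best_index = np.argmax(gs.cv_results_['mean_test_score'])
print(gs.cv_results_['params'][best_index])
```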
```python
%%time
from sklearn.model_selection import GridSearchCV

params = {
    'min_impurity_decrease': [0.0001, 0.0002, 0.0003, 0.0004, 0.0005],
    'max_depth': [3, 4, 5, 6, 7]
}
gs = GridSearchCV(DecisionTreeClassifier(random_state=42), params, n_jobs=-1)
gs.fit(train_input, train_target)
```
CPU times: user 167 ms, sys: 4.85 ms, total: 172 ms
Wall time: 585 ms
```python
dt = gs.best_estimator_
print(dt)
print(dt.score(train_input, train_target))
```
DecisionTreeClassifier(max_depth=7, min_impurity_decrease=0.0005,
random_state=42)
0.8830094285164518
```python
print(gs.cv_results_['mean_test_score'])
print(gs.best_params_)
```
[0.84125583 0.84125583 0.84125583 0.84125583 0.84125583 0.85337806
0.85337806 0.85337806 0.85337806 0.85318557 0.85780355 0.85799604
0.85857352 0.85857352 0.85838102 0.85645721 0.85799678 0.85876675
0.85972866 0.86088306 0.85607093 0.85761031 0.85799511 0.85991893
0.86280466]
{'max_depth': 7, 'min_impurity_decrease': 0.0005}
The optimal value of 'min_impurity_decrease' changes when 'max_depth' changes, which is why the parameters must be searched together.
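The grid can also be built with np.arange and range to search more finely; a sketch with illustrative ranges (not the values used above):

```python
params = {
    'min_impurity_decrease': np.arange(0.0001, 0.001, 0.0001),  # 9 values
    'max_depth': range(5, 20, 1),                               # 15 values
    'min_samples_split': range(2, 100, 10)                      # 10 values
}
gs = GridSearchCV(DecisionTreeClassifier(random_state=42), params, n_jobs=-1)
gs.fit(train_input, train_target)
```

This grid already means 9 × 15 × 10 = 1,350 combinations, each cross-validated 5 times (6,750 fits), which is what motivates Random Search below.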
Random Search
Finds the optimal hyperparameters by sampling combinations from a predetermined range of values.
Instead of lists of values, it is given probability distribution objects from which parameters can be sampled (demonstrated after the code below).
```python
from scipy.stats import uniform, randint

params = {
    'min_impurity_decrease': uniform(0.0001, 0.001),
    'max_depth': randint(20, 50)
}
```
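These SciPy distribution objects expose an rvs() method that draws random samples; a quick check (output varies with the random state):

```python
print(randint(20, 50).rvs(5))         # integers sampled from [20, 50)
print(uniform(0.0001, 0.001).rvs(3))  # floats sampled from [0.0001, 0.0011)
```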
```python
%%time
from sklearn.model_selection import RandomizedSearchCV

# sample 100 parameter combinations from the distributions above
gs = RandomizedSearchCV(DecisionTreeClassifier(random_state=42), params,
                        n_iter=100, n_jobs=-1, random_state=42)
gs.fit(train_input, train_target)
```
CPU times: user 629 ms, sys: 15.8 ms, total: 645 ms
Wall time: 2.54 s
```python
dt = gs.best_estimator_
print(dt)
print(dt.score(train_input, train_target))
print(gs.best_params_)
```
DecisionTreeClassifier(max_depth=29, min_impurity_decrease=0.000437615171403628,
random_state=42)
0.8903213392341736
{'max_depth': 29, 'min_impurity_decrease': 0.000437615171403628}
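Finally, the selected model should be evaluated once on the held-out test set; a minimal sketch:

```python
# final, one-time evaluation on the untouched test set
print(gs.best_estimator_.score(test_input, test_target))
```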
Ref.) 혼자 공부하는 머신러닝+딥러닝 (Self-Study Machine Learning + Deep Learning), Haesun Park, Hanbit Media