Distribution of the Training and Test Datasets
Before diving into a competition, we should compare the distribution of the test set against that of the training set and, where possible, see how much they differ. This is very helpful for deciding how to proceed with modeling.
First, import the required libraries:
import gc
import itertools
from copy import deepcopy

import numpy as np
import pandas as pd
from tqdm import tqdm
from scipy.stats import ks_2samp
from sklearn.preprocessing import scale, MinMaxScaler
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA  # used by test_pca below; missing from the original listing
from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import FastICA
from sklearn.random_projection import GaussianRandomProjection
from sklearn.random_projection import SparseRandomProjection
from sklearn import manifold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report
from sklearn.model_selection import StratifiedKFold
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
%matplotlib inline
1. t-SNE Overview of the Distributions
First, I take an equal number of samples from the training and test sets (4459 samples from each, i.e. the entire training set plus an equally sized sample of the test set) and run t-SNE on the combined data. I scale all data to zero mean and unit variance, but for columns with significant outliers (> 3x the standard deviation) I also apply a log transform before scaling.
1.0 Data Preprocessing
The current preprocessing routine:
def combined_data(train, test):
    """
    Get the combined data
    :param train pandas.DataFrame:
    :param test pandas.DataFrame:
    :return pandas.DataFrame:
    """
    A = set(train.columns.values)
    B = set(test.columns.values)
    colToDel = A.difference(B)
    total_df = pd.concat([train.drop(colToDel, axis=1), test], axis=0)
    return total_df
Remove duplicate columns:
def remove_duplicate_columns(total_df):
    """Remove duplicate columns"""
    colsToRemove = []
    columns = total_df.columns
    for i in range(len(columns) - 1):
        v = total_df[columns[i]].values
        for j in range(i + 1, len(columns)):
            if np.array_equal(v, total_df[columns[j]].values):
                colsToRemove.append(columns[j])
    colsToRemove = list(set(colsToRemove))
    total_df.drop(colsToRemove, axis=1, inplace=True)
    print(f">> Dropped {len(colsToRemove)} duplicate columns")
    return total_df
Handle extreme values:
def log_significant_outliers(total_df):
    """
    First fill NaNs, then log-transform all columns which have
    significant outliers (> 3x standard deviation) and scale non-zero entries.
    :return pandas.DataFrame:
    """
    total_df_all = deepcopy(total_df).select_dtypes(include=[np.number])
    total_df_all.fillna(0, inplace=True)  # fill NaNs with 0 first

    for col in total_df_all.columns:
        data = total_df_all[col].values
        data_mean, data_std = np.mean(data), np.std(data)
        cut_off = data_std * 3
        lower, upper = data_mean - cut_off, data_mean + cut_off
        outliers = [x for x in data if x < lower or x > upper]

        # Log-transform non-zero entries of columns with significant outliers
        if len(outliers) > 0:
            non_zero_index = data != 0
            total_df_all.loc[non_zero_index, col] = np.log(data[non_zero_index])

        # Scale the non-zero entries of every column
        non_zero_rows = total_df[col] != 0
        total_df_all.loc[non_zero_rows, col] = scale(total_df_all.loc[non_zero_rows, col])
        gc.collect()

    return total_df_all
After this step we have two slightly different versions of the data, differing in how extreme values were handled.
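The post never shows how these functions are strung together. A minimal sketch of the glue code, assuming train_df and test_df are the competition CSVs loaded with pd.read_csv (the file names and the equal-sized test sample are my guesses based on the description above), might look like:

train_df = pd.read_csv('train.csv')                       # assumed file name
test_df = pd.read_csv('test.csv')                         # assumed file name
test_df = test_df.sample(len(train_df), random_state=42)  # equal-sized test sample, as described above

# Build the combined frame, then the log-transformed/scaled variant
total_df = remove_duplicate_columns(combined_data(train_df, test_df))
total_df_all = log_significant_outliers(total_df)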
1.1 Running PCA
Since there are many features, I think it is a good idea to run PCA before t-SNE to reduce the dimensionality. Somewhat arbitrarily, I chose to keep 1000 PCA components, which capture roughly 80% of the variance in the data; I believe this is enough to characterize the distributions while also speeding up t-SNE. Below, I only show the plots from the PCA of the dataset.
def test_pca(data, train_idx, test_idx, create_plots=True):
    """
    data: pandas.DataFrame
    train_idx = range(0, len(train_df))
    test_idx = range(len(train_df), len(total_df))
    Run PCA analysis, return embedding
    """
    data = data.select_dtypes(include=[np.number])
    data = data.fillna(0)

    # Create a PCA object, specifying how many components we wish to keep
    pca = PCA(n_components=len(data.columns))

    # Run PCA on scaled numeric dataframe, and retrieve the projected data
    pca_trafo = pca.fit_transform(data)

    # The transformed data is in a numpy matrix. This may be inconvenient if we want to further
    # process the data, and have a more visual impression of what each column is etc. We therefore
    # put transformed/projected data into new dataframe, where we specify column names and index
    pca_df = pd.DataFrame(
        pca_trafo,
        index=data.index,
        columns=['PC' + str(i + 1) for i in range(pca_trafo.shape[1])]
    )

    if create_plots:
        # Create a 2x2 grid of plots and flatten it into a list of axes
        _, axes = plt.subplots(2, 2, figsize=(20, 15))
        axes = list(itertools.chain.from_iterable(axes))

        # Plot the explained variance ratio
        axes[0].plot(
            pca.explained_variance_ratio_, "--o", linewidth=2,
            label="Explained variance ratio"
        )

        # Plot the cumulative explained variance ratio
        axes[0].plot(
            pca.explained_variance_ratio_.cumsum(), "--o", linewidth=2,
            label="Cumulative explained variance ratio"
        )

        # Show legend
        axes[0].legend(loc='best', frameon=True)

        # Show biplots
        for i in range(1, 4):
            # Components to be plotted
            x, y = "PC" + str(i), "PC" + str(i + 1)

            # Plot biplots
            settings = {'kind': 'scatter', 'ax': axes[i], 'alpha': 0.2, 'x': x, 'y': y}

            pca_df.iloc[train_idx].plot(label='Train', c='#ff7f0e', **settings)
            pca_df.iloc[test_idx].plot(label='Test', c='#1f77b4', **settings)

    return pca_df

train_idx = range(0, len(train_df))
test_idx = range(len(train_df), len(total_df))

pca_df = test_pca(total_df, train_idx, test_idx)
pca_df_all = test_pca(total_df_all, train_idx, test_idx)
print(">> PCA : (only for np.number)", pca_df.shape, pca_df_all.shape)
This looks interesting: the training data is more spread out than the test data, while the test data seems to cluster more tightly around the center.
1.2 Running t-SNE
With the dimensionality somewhat reduced, t-SNE now runs in about 5 minutes, after which we can plot the training and test data in the embedded 2D space. Below, we do this for both dataset variants to see whether there are any differences.
def test_tsne(data, ax=None, title='t-SNE'):
    """Run t-SNE and return embedding"""
    # Fall back to the current axes if none are given
    if ax is None:
        ax = plt.gca()

    # Run t-SNE
    tsne = TSNE(n_components=2, init='pca')
    Y = tsne.fit_transform(data)

    # Create plot
    for name, idx in zip(["Train", "Test"], [train_idx, test_idx]):
        ax.scatter(Y[idx, 0], Y[idx, 1], label=name, alpha=0.2)
    ax.set_title(title)
    ax.xaxis.set_major_formatter(NullFormatter())
    ax.yaxis.set_major_formatter(NullFormatter())
    ax.legend()
    return Y

# Run t-SNE on PCA embedding
_, axes = plt.subplots(1, 2, figsize=(20, 8))

tsne_df = test_tsne(
    pca_df, axes[0],
    title='t-SNE: Scaling on non-zeros'
)
tsne_df_unique = test_tsne(
    pca_df_all, axes[1],
    title='t-SNE: Scaling on all entries'
)

plt.axis('tight')
plt.show()
From this it appears that the training and test sets look more similar when scaling is applied only to non-zero entries; when all entries are scaled, the two sets appear more separated from each other. In a previous version of this notebook I did not remove duplicate columns or columns with zero standard deviation, and in that case the difference was much more pronounced. Of course, in my experience t-SNE plots should be interpreted with caution, and this may be worth investigating in more detail, both in terms of t-SNE parameters and preprocessing.
1.2.1 t-SNE colored by row index or zero count
This looks interesting: rows with higher indices seem to lie near the center of the plot. We also see a small cluster of rows with almost no zero entries, plus a few more clusters in the right-hand plot.
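The code for these plots is not included in the post; a minimal sketch of how the coloring could be reproduced from the tsne_df embedding above (zero counts taken from total_df, colormap choice is mine) might look like this:

_, axes = plt.subplots(1, 2, figsize=(20, 8))

# Color each embedded point by its row index in the combined frame
sc = axes[0].scatter(tsne_df[:, 0], tsne_df[:, 1], alpha=0.2,
                     c=range(len(tsne_df)), cmap='viridis')
plt.colorbar(sc, ax=axes[0]).set_label('Entry index')
axes[0].set_title('t-SNE colored by row index')

# Color each point by how many zero entries its row contains
zero_count = (total_df == 0).sum(axis=1).values
sc = axes[1].scatter(tsne_df[:, 0], tsne_df[:, 1], alpha=0.2,
                     c=zero_count, cmap='viridis')
plt.colorbar(sc, ax=axes[1]).set_label('Number of zero entries')
axes[1].set_title('t-SNE colored by zero count')
plt.show()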
1.2.2 t-SNE with different parameters
Depending on its parameters, t-SNE can give quite different results, so to be sure I checked a few different values of the perplexity parameter below.
_, axes = plt.subplots(1, 4, figsize=(20, 5))
for i, perplexity in enumerate([5, 30, 50, 100]):

    # Create projection
    Y = TSNE(init='pca', perplexity=perplexity).fit_transform(pca_df)

    # Plot t-SNE
    for name, idx in zip(["Train", "Test"], [train_idx, test_idx]):
        axes[i].scatter(Y[idx, 0], Y[idx, 1], label=name, alpha=0.2)
    axes[i].set_title("Perplexity=%d" % perplexity)
    axes[i].xaxis.set_major_formatter(NullFormatter())
    axes[i].yaxis.set_major_formatter(NullFormatter())
    axes[i].legend()
plt.show()
2. Test vs. Train
Another good approach is to check how well we can classify whether a given entry belongs to the test or the training set. If this can be done with reasonable accuracy, it is an indication that the two distributions differ. I'll run a simple shuffled 10-fold cross-validation with a basic tree ensemble (an ExtraTreesClassifier) and see how well it performs this task. First, let's try classifying the version where scaling was applied to all entries:
def test_prediction(data):
    """Try to classify train/test samples from total dataframe"""
    # Create a target which is 1 for training rows, 0 for test rows
    y = np.zeros(len(data))
    y[train_idx] = 1

    # Perform shuffled CV predictions of train/test label
    predictions = cross_val_predict(
        ExtraTreesClassifier(n_estimators=100, n_jobs=4),
        data, y,
        cv=StratifiedKFold(
            n_splits=10,
            shuffle=True,
            random_state=42
        )
    )

    # Show the classification report
    print(classification_report(y, predictions))

# Run classification on total raw data
test_prediction(total_df_all)
On the current data this gives an F1 score of about 0.71, which means we can make this prediction rather well — an indication that there are significant differences between the two datasets. Let's try the dataset where only non-zero values were scaled:
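The call that produces the report below is not shown in the post; presumably it is simply the same check run on the non-zero-scaled frame:

test_prediction(total_df)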
>> Prediction Train or Test
             precision    recall  f1-score   support

        0.0       0.86      0.46      0.60      4459
        1.0       0.63      0.92      0.75      4459

avg / total       0.75      0.69      0.68      8918
3. Distribution Similarity per Feature
Next, let's look at the problem feature by feature and run a Kolmogorov-Smirnov test to check whether the distributions in the test and training sets are similar. I'll use the ks_2samp function from scipy to run the test. For features whose distributions are highly distinguishable, we may benefit from ignoring those columns, to avoid overfitting the training data. Below, I simply identify these columns and plot the distributions of a few of them as a sanity check.
def get_diff_columns(train_df, test_df, show_plots=True, show_all=False, threshold=0.1):
    """Use KS to estimate columns where distributions differ a lot from each other"""

    # Find the columns where the distributions are very different
    diff_data = []
    for col in tqdm(train_df.columns):
        statistic, pvalue = ks_2samp(
            train_df[col].values,
            test_df[col].values
        )
        if pvalue <= 0.05 and np.abs(statistic) > threshold:
            diff_data.append({'feature': col, 'p': np.round(pvalue, 5), 'statistic': np.round(np.abs(statistic), 2)})

    # Put the differences into a dataframe
    diff_df = pd.DataFrame(diff_data).sort_values(by='statistic', ascending=False)

    if show_plots:
        # Let us see the distributions of these columns to confirm they are indeed different
        n_cols = 7
        if show_all:
            n_rows = int(len(diff_df) / 7)
        else:
            n_rows = 2
        _, axes = plt.subplots(n_rows, n_cols, figsize=(20, 3 * n_rows))
        axes = [x for l in axes for x in l]

        # Create plots
        for i, (_, row) in enumerate(diff_df.iterrows()):
            if i >= len(axes):
                break
            extreme = np.max(np.abs(train_df[row.feature].tolist() + test_df[row.feature].tolist()))
            train_df.loc[:, row.feature].apply(np.log1p).hist(
                ax=axes[i], alpha=0.5, label='Train', density=True,
                bins=np.arange(-extreme, extreme, 0.25)
            )
            test_df.loc[:, row.feature].apply(np.log1p).hist(
                ax=axes[i], alpha=0.5, label='Test', density=True,
                bins=np.arange(-extreme, extreme, 0.25)
            )
            axes[i].set_title(f"Statistic = {row.statistic}, p = {row.p}")
            axes[i].set_xlabel(f'Log({row.feature})')
            axes[i].legend()

        plt.tight_layout()
        plt.show()

    return diff_df

# Get the columns which differ a lot between test and train
diff_df = get_diff_columns(total_df.iloc[train_idx], total_df.iloc[test_idx])
>> Dropping 22 features based on KS tests
             precision    recall  f1-score   support

        0.0       0.85      0.45      0.59      4459
        1.0       0.63      0.92      0.75      4459

avg / total       0.74      0.68      0.67      8918
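The code behind this output is likewise not shown; presumably it drops the columns flagged by the KS test and reruns the train/test classifier, roughly like this (a reconstruction, not the author's exact code):

# Drop the KS-flagged columns and rerun the adversarial classifier
print(f">> Dropping {len(diff_df)} features based on KS tests")
test_prediction(
    total_df_all.drop(diff_df.feature.values, axis=1)
)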
4. Decomposition Features
So far I have only looked at PCA components, but most kernels consider several decomposition methods, so it may be interesting to look at a t-SNE of 10-50 components from each method instead of 1000 PCA components. It is also interesting to see how well we can classify test/train entries based on this reduced feature space (see the sketch after the code below).
COMPONENTS = 20

# List of decomposition methods to use
methods = [
    TruncatedSVD(n_components=COMPONENTS),
    PCA(n_components=COMPONENTS),
    FastICA(n_components=COMPONENTS),
    GaussianRandomProjection(n_components=COMPONENTS, eps=0.1),
    SparseRandomProjection(n_components=COMPONENTS, dense_output=True)
]

# Run all the methods
embeddings = []
for method in methods:
    name = method.__class__.__name__
    embeddings.append(
        pd.DataFrame(method.fit_transform(total_df), columns=[f"{name}_{i}" for i in range(COMPONENTS)])
    )
    print(f">> Ran {name}")

# Put all components into one dataframe
components_df = pd.concat(embeddings, axis=1)

# Prepare plot
fig, axes = plt.subplots(1, 3, figsize=(20, 5))
cm = plt.cm.viridis  # colormap; not defined in the original snippet, any sequential map works

# Run t-SNE on components
tsne_df = test_tsne(
    components_df, axes[0],
    title='t-SNE: with decomposition features'
)

# Color by index
sc = axes[1].scatter(tsne_df[:, 0], tsne_df[:, 1], alpha=0.2, c=range(len(tsne_df)), cmap=cm)
cbar = fig.colorbar(sc, ax=axes[1])
cbar.set_label('Entry index')
axes[1].set_title("t-SNE colored by index")
axes[1].xaxis.set_major_formatter(NullFormatter())
axes[1].yaxis.set_major_formatter(NullFormatter())

# Color by target
sc = axes[2].scatter(tsne_df[train_idx, 0], tsne_df[train_idx, 1], alpha=0.2, c=np.log1p(train_df.target), cmap=cm)
cbar = fig.colorbar(sc, ax=axes[2])
cbar.set_label('Log1p(target)')
axes[2].set_title("t-SNE colored by target")
axes[2].xaxis.set_major_formatter(NullFormatter())
axes[2].yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
plt.show()
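The train/test classification on this reduced feature space, mentioned at the start of this section, is not shown either; presumably it is again just:

test_prediction(components_df)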
With these decomposition features, the test and training set distributions look similar.
Original post: https://www.jianshu.com/p/464faf4953c4