
Author: HOS (安全风信子)  Date: 2026-01-09  Source platform: GitHub

Abstract: Regularization is a core machine-learning technique for controlling model complexity and preventing overfitting. In offensive and defensive security scenarios its role goes beyond overfitting control: by penalizing specific model behaviors, it also strengthens model robustness and security. This article analyzes the penalty mechanism of regularization from both a mathematical and an engineering perspective, including the special meaning of common methods such as L1, L2, and Dropout in security settings. Drawing on recent open-source GitHub projects and security practice, it provides three complete code examples, two Mermaid architecture diagrams, and two comparison tables, and systematically lays out regularization strategies for security-oriented complexity control. The article helps security engineers understand how regularization affects model security and offers practical guidance for choosing and applying suitable regularization methods in adversarial environments.
Regularization limits model complexity by adding an extra term to the loss function and has traditionally been used to prevent overfitting. Common methods include L1, L2, Elastic Net, and Dropout.
The conventional view is that regularization improves generalization by penalizing model complexity, but in security scenarios this understanding is too simplistic.
In offensive and defensive security scenarios, regularization must meet requirements beyond generalization, such as adversarial robustness, interpretability, and a reduced attack surface.
Recent GitHub projects and arXiv research papers show that regularization for security is an active research area.
The core of regularization is adding a penalty term to the loss function:

L(θ) = L_train(θ) + λ · R(θ)

where:

- L_train(θ): the training loss, measuring how well the model fits the training data
- λ: the regularization strength, controlling the weight of the penalty term
- R(θ): the regularization term, measuring model complexity

Different regularization methods differ in the form of R(θ):
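As a quick numeric check of the formula above, the following sketch computes the L1, L2, and Elastic Net penalty terms for an arbitrary illustrative weight vector (the values are assumptions for illustration, not taken from any model in this article):

```python
import numpy as np

# Hypothetical weight vector and strength, chosen only for illustration
theta = np.array([0.5, -0.3, 0.0, 2.0])
lam = 0.1    # regularization strength λ
alpha = 0.5  # Elastic Net mixing ratio α

l1_penalty = lam * np.sum(np.abs(theta))   # λ·Σ|θ_i|  = 0.1 * 2.8  = 0.28
l2_penalty = lam * np.sum(theta ** 2)      # λ·Σθ_i²   = 0.1 * 4.34 = 0.434
enet_penalty = lam * (alpha * np.sum(np.abs(theta))
                      + (1 - alpha) * np.sum(theta ** 2))

print(l1_penalty, l2_penalty, enet_penalty)
```

Note how the zero weight contributes nothing to either norm, while the large weight 2.0 dominates the L2 term, which is why L2 penalizes large parameters much harder than small ones.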
| Regularization method | Form of R(θ) | Mathematical expression | Penalty characteristics |
| L1 regularization | L1 norm | Σ\|θ_i\| | Drives parameters to exactly 0, producing sparse solutions |
| L2 regularization | L2 norm | Σθ_i² | Drives parameter values toward 0; all parameters stay non-zero |
| Elastic Net | L1 + L2 norm | αΣ\|θ_i\| + (1−α)Σθ_i² | Combines sparsity with weight shrinkage |
| Dropout | Implicit regularization | No explicit expression | Discourages co-adaptation between neurons |
In offensive and defensive security scenarios, the penalty mechanism of regularization takes on special significance.
From a security perspective, regularization is a complexity-control mechanism: it improves model security by penalizing dependence on too many features, oversized feature weights, co-adaptation between neurons, and sensitivity to adversarial perturbations.
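The complexity-control effect can be observed directly. The sketch below (synthetic data and parameter choices are assumptions, not the article's dataset) fits L2-regularized logistic regression at several strengths and prints the shrinking weight norm; in scikit-learn, C = 1/λ, so a smaller C means a stronger penalty:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data for illustration only
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Sweep the regularization strength: C = 1/λ, so smaller C = stronger penalty
for C in [10.0, 1.0, 0.1, 0.01]:
    clf = LogisticRegression(penalty='l2', C=C, max_iter=1000).fit(X, y)
    # The L2 norm of the weights shrinks as the penalty gets stronger
    print(f"C={C:>5}: ||w|| = {np.linalg.norm(clf.coef_):.4f}")
```

The monotone shrinkage of ||w|| is exactly the "parameter explosion" control that the L2 row of the comparison table below refers to.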
Mermaid flowchart: (not rendered in this copy)

Mermaid architecture diagram: security regularization system with a security monitoring module (not rendered in this copy)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, recall_score, precision_score

# Generate imbalanced classification data of the kind seen in security tasks
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           n_redundant=10, n_classes=2, weights=[0.95, 0.05],
                           random_state=42)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define models with different regularization methods
# (penalty=None requires scikit-learn >= 1.2; on older versions use penalty='none')
models = {
    "No regularization": LogisticRegression(penalty=None, solver='lbfgs', random_state=42, max_iter=1000),
    "L1 regularization": LogisticRegression(penalty='l1', solver='liblinear', C=0.1, random_state=42),
    "L2 regularization": LogisticRegression(penalty='l2', solver='lbfgs', C=0.1, random_state=42, max_iter=1000),
    "Elastic Net": LogisticRegression(penalty='elasticnet', solver='saga', l1_ratio=0.5, C=0.1, random_state=42, max_iter=1000)
}

# Train the models and evaluate
results = {}
for model_name, model in models.items():
    model.fit(X_train, y_train)
    # Predict on the training and test sets
    y_train_pred = model.predict(X_train)
    y_test_pred = model.predict(X_test)
    # Security-relevant metrics
    f1_train = f1_score(y_train, y_train_pred)
    f1_test = f1_score(y_test, y_test_pred)
    recall_train = recall_score(y_train, y_train_pred)
    recall_test = recall_score(y_test, y_test_pred)
    precision_train = precision_score(y_train, y_train_pred)
    precision_test = precision_score(y_test, y_test_pred)
    # Generalization gap
    generalization_error = f1_train - f1_test
    # Model complexity: number of non-zero parameters
    non_zero_params = int(np.sum(model.coef_ != 0))
    # Store the results
    results[model_name] = {
        "f1_train": f1_train,
        "f1_test": f1_test,
        "recall_train": recall_train,
        "recall_test": recall_test,
        "precision_train": precision_train,
        "precision_test": precision_test,
        "generalization_error": generalization_error,
        "non_zero_params": non_zero_params
    }

# Visualize the results
fig, axes = plt.subplots(2, 2, figsize=(16, 12))

# F1 score comparison
model_names = list(results.keys())
f1_train_scores = [results[model]["f1_train"] for model in model_names]
f1_test_scores = [results[model]["f1_test"] for model in model_names]
axes[0, 0].bar(model_names, f1_train_scores, alpha=0.5, label='Train F1', color='#32CD32')
axes[0, 0].bar(model_names, f1_test_scores, alpha=0.5, label='Test F1', color='#4169E1')
axes[0, 0].set_title('F1 Score Comparison')
axes[0, 0].set_ylabel('F1 Score')
axes[0, 0].tick_params(axis='x', rotation=45)
axes[0, 0].legend()

# Recall comparison
recall_train_scores = [results[model]["recall_train"] for model in model_names]
recall_test_scores = [results[model]["recall_test"] for model in model_names]
axes[0, 1].bar(model_names, recall_train_scores, alpha=0.5, label='Train Recall', color='#32CD32')
axes[0, 1].bar(model_names, recall_test_scores, alpha=0.5, label='Test Recall', color='#4169E1')
axes[0, 1].set_title('Recall Comparison')
axes[0, 1].set_ylabel('Recall')
axes[0, 1].tick_params(axis='x', rotation=45)
axes[0, 1].legend()

# Generalization error
generalization_errors = [results[model]["generalization_error"] for model in model_names]
axes[1, 0].bar(model_names, generalization_errors, color='#FF4500')
axes[1, 0].set_title('Generalization Error (Train F1 - Test F1)')
axes[1, 0].set_ylabel('Generalization Error')
axes[1, 0].tick_params(axis='x', rotation=45)

# Number of non-zero parameters (model complexity)
non_zero_params = [results[model]["non_zero_params"] for model in model_names]
axes[1, 1].bar(model_names, non_zero_params, color='#DA70D6')
axes[1, 1].set_title('Number of Non-Zero Parameters (Model Complexity)')
axes[1, 1].set_ylabel('Number of Non-Zero Parameters')
axes[1, 1].tick_params(axis='x', rotation=45)

plt.tight_layout()
plt.savefig('regularization_comparison.png')
print("Regularization comparison plot saved as regularization_comparison.png")

# Print detailed results
print("\nRegularization method comparison:")
print("-" * 95)
print(f"{'Model':<20} {'Train F1':<10} {'Test F1':<10} {'Gen. error':<12} {'Test recall':<12} {'Test precision':<15} {'Non-zero params':<15}")
print("-" * 95)
for model_name, metrics in results.items():
    print(f"{model_name:<20} {metrics['f1_train']:<10.4f} {metrics['f1_test']:<10.4f} {metrics['generalization_error']:<12.4f} "
          f"{metrics['recall_test']:<12.4f} {metrics['precision_test']:<15.4f} {metrics['non_zero_params']:<15}")

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, recall_score, precision_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

# Generate imbalanced classification data of the kind seen in security tasks
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_redundant=5, n_classes=2, weights=[0.95, 0.05],
                           random_state=42)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build neural networks with different dropout rates
def build_model(dropout_rate=0.0):
    model = Sequential([
        Dense(64, activation='relu', input_shape=(X.shape[1],)),
        Dropout(dropout_rate),
        Dense(32, activation='relu'),
        Dropout(dropout_rate),
        Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer=Adam(learning_rate=0.001),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# Define models with different dropout rates
dropout_rates = [0.0, 0.2, 0.4, 0.6]
models = {}
for dropout_rate in dropout_rates:
    model_name = f"Dropout {dropout_rate}"
    models[model_name] = build_model(dropout_rate)

# Train the models and evaluate
dropout_results = {}
early_stopping = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
for model_name, model in models.items():
    print(f"Training {model_name}...")
    history = model.fit(X_train, y_train,
                        epochs=50,
                        batch_size=32,
                        validation_split=0.2,
                        callbacks=[early_stopping],
                        verbose=0)
    # Predict on the training set
    y_train_pred_prob = model.predict(X_train, verbose=0)
    y_train_pred = (y_train_pred_prob > 0.5).astype(int).flatten()
    # Predict on the test set
    y_test_pred_prob = model.predict(X_test, verbose=0)
    y_test_pred = (y_test_pred_prob > 0.5).astype(int).flatten()
    # Security-relevant metrics
    f1_train = f1_score(y_train, y_train_pred)
    f1_test = f1_score(y_test, y_test_pred)
    recall_train = recall_score(y_train, y_train_pred)
    recall_test = recall_score(y_test, y_test_pred)
    precision_train = precision_score(y_train, y_train_pred)
    precision_test = precision_score(y_test, y_test_pred)
    # Generalization gap
    generalization_error = f1_train - f1_test
    # Store the results
    dropout_results[model_name] = {
        "f1_train": f1_train,
        "f1_test": f1_test,
        "recall_train": recall_train,
        "recall_test": recall_test,
        "precision_train": precision_train,
        "precision_test": precision_test,
        "generalization_error": generalization_error,
        "history": history
    }

# Visualize the results
fig, axes = plt.subplots(2, 2, figsize=(16, 12))

# F1 score comparison
model_names = list(dropout_results.keys())
f1_train_scores = [dropout_results[model]["f1_train"] for model in model_names]
f1_test_scores = [dropout_results[model]["f1_test"] for model in model_names]
axes[0, 0].bar(model_names, f1_train_scores, alpha=0.5, label='Train F1', color='#32CD32')
axes[0, 0].bar(model_names, f1_test_scores, alpha=0.5, label='Test F1', color='#4169E1')
axes[0, 0].set_title('F1 Score Comparison with Different Dropout Rates')
axes[0, 0].set_ylabel('F1 Score')
axes[0, 0].tick_params(axis='x', rotation=45)
axes[0, 0].legend()

# Recall comparison
recall_train_scores = [dropout_results[model]["recall_train"] for model in model_names]
recall_test_scores = [dropout_results[model]["recall_test"] for model in model_names]
axes[0, 1].bar(model_names, recall_train_scores, alpha=0.5, label='Train Recall', color='#32CD32')
axes[0, 1].bar(model_names, recall_test_scores, alpha=0.5, label='Test Recall', color='#4169E1')
axes[0, 1].set_title('Recall Comparison with Different Dropout Rates')
axes[0, 1].set_ylabel('Recall')
axes[0, 1].tick_params(axis='x', rotation=45)
axes[0, 1].legend()

# Generalization error
generalization_errors = [dropout_results[model]["generalization_error"] for model in model_names]
axes[1, 0].bar(model_names, generalization_errors, color='#FF4500')
axes[1, 0].set_title('Generalization Error with Different Dropout Rates')
axes[1, 0].set_ylabel('Generalization Error')
axes[1, 0].tick_params(axis='x', rotation=45)

# Training history
for model_name, result in dropout_results.items():
    axes[1, 1].plot(result["history"].history['loss'], label=f'{model_name} - train loss')
    axes[1, 1].plot(result["history"].history['val_loss'], label=f'{model_name} - val loss')
axes[1, 1].set_title('Training and Validation Loss with Different Dropout Rates')
axes[1, 1].set_xlabel('Epochs')
axes[1, 1].set_ylabel('Loss')
axes[1, 1].legend()
axes[1, 1].tick_params(axis='x', rotation=0)

plt.tight_layout()
plt.savefig('dropout_comparison.png')
print("Dropout rate comparison plot saved as dropout_comparison.png")

# Print detailed results
print("\nDropout rate comparison:")
print("-" * 95)
print(f"{'Model':<15} {'Train F1':<10} {'Test F1':<10} {'Gen. error':<12} {'Test recall':<12} {'Test precision':<15}")
print("-" * 95)
for model_name, metrics in dropout_results.items():
    print(f"{model_name:<15} {metrics['f1_train']:<10.4f} {metrics['f1_test']:<10.4f} {metrics['generalization_error']:<12.4f} "
          f"{metrics['recall_test']:<12.4f} {metrics['precision_test']:<15.4f}")

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, recall_score, precision_score

# Generate security-style classification data
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=5, n_classes=2, weights=[0.9, 0.1],
                           random_state=42)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train models with different regularization strengths (in scikit-learn, C = 1/λ)
regularization_strengths = [0.01, 0.1, 1.0, 10.0]
models = {}
for c in regularization_strengths:
    model_name = f"L2 regularization (C={c})"
    models[model_name] = LogisticRegression(penalty='l2', solver='lbfgs', C=c, random_state=42, max_iter=1000)
    models[model_name].fit(X_train, y_train)

# Generate adversarial examples (FGSM)
def generate_fgsm_adversarial(model, X, y, epsilon=0.1):
    """Generate FGSM adversarial examples for a linear model.

    For a linear decision function f(x) = w·x + b the input gradient is w,
    so the attack direction is sign(w): class-1 samples are pushed toward
    lower scores and class-0 samples toward higher scores.
    """
    if not hasattr(model, 'coef_'):
        raise ValueError("model must be linear and expose coef_")
    w_sign = np.sign(model.coef_)  # shape (1, n_features)
    direction = np.where(y[:, None] == 1, -1.0, 1.0) * w_sign
    return X + epsilon * direction

# Evaluate each model on adversarial examples
adversarial_reg_results = {}
epsilon = 0.1  # adversarial perturbation strength
for model_name, model in models.items():
    # Evaluate on the original test set
    y_pred_original = model.predict(X_test)
    f1_original = f1_score(y_test, y_pred_original)
    recall_original = recall_score(y_test, y_pred_original)
    precision_original = precision_score(y_test, y_pred_original)
    # Generate adversarial examples and evaluate on them
    X_test_adversarial = generate_fgsm_adversarial(model, X_test, y_test, epsilon=epsilon)
    y_pred_adversarial = model.predict(X_test_adversarial)
    f1_adversarial = f1_score(y_test, y_pred_adversarial)
    recall_adversarial = recall_score(y_test, y_pred_adversarial)
    precision_adversarial = precision_score(y_test, y_pred_adversarial)
    # Performance drop under attack
    f1_drop = f1_original - f1_adversarial
    recall_drop = recall_original - recall_adversarial
    precision_drop = precision_original - precision_adversarial
    # Parameter norm (model complexity)
    param_norm = np.linalg.norm(model.coef_)
    adversarial_reg_results[model_name] = {
        "original": {
            "f1": f1_original,
            "recall": recall_original,
            "precision": precision_original
        },
        "adversarial": {
            "f1": f1_adversarial,
            "recall": recall_adversarial,
            "precision": precision_adversarial
        },
        "drop": {
            "f1": f1_drop,
            "recall": recall_drop,
            "precision": precision_drop
        },
        "param_norm": param_norm
    }

# Visualize the results
fig, axes = plt.subplots(1, 3, figsize=(20, 6))

# Original vs adversarial performance
model_names = list(adversarial_reg_results.keys())
metrics = ["f1", "recall", "precision"]
metric_labels = {"f1": "F1 Score", "recall": "Recall", "precision": "Precision"}
for i, metric in enumerate(metrics):
    original_scores = [adversarial_reg_results[model]["original"][metric] for model in model_names]
    adversarial_scores = [adversarial_reg_results[model]["adversarial"][metric] for model in model_names]
    # Grouped bar chart
    x = np.arange(len(model_names))
    width = 0.35
    axes[i].bar(x - width/2, original_scores, width, label='Original', color='#32CD32')
    axes[i].bar(x + width/2, adversarial_scores, width, label='Adversarial', color='#FF4500')
    axes[i].set_title(f'Original vs Adversarial {metric_labels[metric]}')
    axes[i].set_ylabel(metric_labels[metric])
    axes[i].set_xticks(x)
    axes[i].set_xticklabels(model_names, rotation=45)
    axes[i].legend()
    axes[i].grid(True, axis='y')

plt.tight_layout()
plt.savefig('regularization_adversarial_impact.png')
print("Adversarial-impact plot saved as regularization_adversarial_impact.png")

# Performance drop vs parameter norm
plt.figure(figsize=(12, 6))
param_norms = [adversarial_reg_results[model]["param_norm"] for model in model_names]
f1_drops = [adversarial_reg_results[model]["drop"]["f1"] for model in model_names]
plt.plot(param_norms, f1_drops, 'o-', color='#4169E1', linewidth=2)
plt.title('F1 Score Drop vs Model Parameter Norm')
plt.xlabel('Model Parameter Norm (L2)')
plt.ylabel('F1 Score Drop on Adversarial Samples')
plt.grid(True)
# Label each point
for i, model_name in enumerate(model_names):
    plt.annotate(model_name, (param_norms[i], f1_drops[i]), textcoords="offset points", xytext=(0, 10), ha='center')
plt.tight_layout()
plt.savefig('regularization_vs_adversarial_resistance.png')
print("Regularization strength vs adversarial resistance plot saved as regularization_vs_adversarial_resistance.png")

# Print detailed results
print("\nEffect of regularization on adversarial examples:")
print("-" * 105)
print(f"{'Model':<25} {'Orig F1':<10} {'Adv F1':<10} {'F1 drop':<10} {'Orig recall':<12} {'Adv recall':<12} {'Recall drop':<12} {'Param norm':<12}")
print("-" * 105)
for model_name, results in adversarial_reg_results.items():
    print(f"{model_name:<25} {results['original']['f1']:<10.4f} {results['adversarial']['f1']:<10.4f} {results['drop']['f1']:<10.4f} "
          f"{results['original']['recall']:<12.4f} {results['adversarial']['recall']:<12.4f} {results['drop']['recall']:<12.4f} {results['param_norm']:<12.4f}")

| Regularization method | Penalty mechanism | Security advantages | Security drawbacks | Computational efficiency | Applicable scenarios | Recommendation |
|---|---|---|---|---|---|---|
| No regularization | None | Fast training | Vulnerable to adversarial attacks, high overfitting risk | High | Rapid prototyping | ⭐ |
| L1 regularization | Penalizes the number of features | Sparse solutions, better interpretability, smaller attack surface | May drop important features | Medium | Security scenarios needing interpretability | ⭐⭐⭐⭐ |
| L2 regularization | Penalizes feature weights | Improves adversarial robustness, prevents parameter blow-up | All features retained, higher model complexity | Medium | Most security scenarios | ⭐⭐⭐⭐⭐ |
| Elastic Net | Penalizes feature count and weights | Combines the strengths of L1 and L2 | Harder to tune | Medium | Scenarios balancing interpretability and robustness | ⭐⭐⭐⭐ |
| Dropout | Penalizes neuron co-dependence | Improves robustness, reduces overfitting | Longer training, dropout rate needs tuning | Low | Deep-learning security models | ⭐⭐⭐⭐ |
| Adversarial training | Penalizes adversarial sensitivity | Greatly improves adversarial robustness | Long training time, high computational cost | Very low | Adversarial environments | ⭐⭐⭐⭐⭐ |
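The adversarial-training row above has no code example elsewhere in the article; the following is a minimal single-round sketch under simplifying assumptions (linear model, synthetic data, and the same sign-of-weights perturbation used in the FGSM example), in which the classifier is retrained on a mix of clean and perturbed samples:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data for illustration only
X, y = make_classification(n_samples=800, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

eps = 0.2
# For a linear model the input gradient is the weight vector, so the FGSM
# direction is -sign(w) for class-1 samples and +sign(w) for class-0 samples
direction = np.where(y[:, None] == 1, -1.0, 1.0) * np.sign(clf.coef_)
X_adv = X + eps * direction

# Retrain on clean + adversarial samples (one round; real pipelines iterate)
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
robust_clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

print("baseline on adversarial data:", clf.score(X_adv, y))
print("robust model on adversarial data:", robust_clf.score(X_adv, y))
```

Production adversarial training regenerates the perturbations against the current model at every step; this sketch only shows the core idea of folding the penalty for adversarial sensitivity into the training data itself.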
| Tuning strategy | Implementation complexity | Computational efficiency | Effectiveness | Applicable scenarios | Recommendation |
|---|---|---|---|---|---|
| Manual tuning | Low | High | Fair | Simple models, rapid development | ⭐⭐⭐ |
| Grid search | Medium | Medium | Good | Few hyperparameters | ⭐⭐⭐⭐ |
| Bayesian optimization | Medium | Medium | Good | Limited samples, more hyperparameters | ⭐⭐⭐⭐ |
| AutoML | High | Low | Excellent | Complex models with many hyperparameters | ⭐⭐⭐⭐ |
| Dynamic adjustment | High | Medium | Excellent | Dynamic threat environments | ⭐⭐⭐⭐⭐ |
References:

Appendix:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, f1_score, recall_score, precision_score

def train_with_regularization(X_train, y_train, penalty='l2', param_grid=None):
    """Train a regularized logistic-regression model.

    Parameters:
        X_train: training feature matrix
        y_train: training label vector
        penalty: regularization type, one of 'l1', 'l2', 'elasticnet', 'none'
        param_grid: hyperparameter grid (optional)

    Returns:
        best_model: the best estimator found
        best_params: the best hyperparameters
        best_score: the best 5-fold cross-validated F1 score
    """
    # Default hyperparameter grid
    if param_grid is None:
        if penalty == 'none':
            param_grid = {}
        else:
            param_grid = {
                'C': [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
            }
            if penalty == 'elasticnet':
                param_grid['l1_ratio'] = [0.1, 0.3, 0.5, 0.7, 0.9]
    # Create the model
    # (penalty=None requires scikit-learn >= 1.2; on older versions use penalty='none')
    if penalty == 'none':
        model = LogisticRegression(penalty=None, solver='lbfgs', random_state=42, max_iter=1000)
    elif penalty == 'l1':
        model = LogisticRegression(penalty='l1', solver='liblinear', random_state=42)
    elif penalty == 'l2':
        model = LogisticRegression(penalty='l2', solver='lbfgs', random_state=42, max_iter=1000)
    elif penalty == 'elasticnet':
        model = LogisticRegression(penalty='elasticnet', solver='saga', random_state=42, max_iter=1000)
    else:
        raise ValueError(f"Unsupported regularization type: {penalty}")
    # Use the F1 score as the selection metric (suits imbalanced data)
    scorer = make_scorer(f1_score)
    # Grid search with 5-fold cross-validation
    grid_search = GridSearchCV(estimator=model,
                               param_grid=param_grid,
                               scoring=scorer,
                               cv=5,
                               n_jobs=-1)
    grid_search.fit(X_train, y_train)
    return grid_search.best_estimator_, grid_search.best_params_, grid_search.best_score_

def evaluate_regularization_effect(model, X_test, y_test, X_adversarial=None):
    """Evaluate how regularization affects model performance.

    Parameters:
        model: trained model
        X_test: test feature matrix
        y_test: test label vector
        X_adversarial: adversarial-example feature matrix (optional)

    Returns:
        results: dictionary of performance metrics
    """
    # Performance on the original test set
    y_pred = model.predict(X_test)
    results = {
        'original': {
            'f1': f1_score(y_test, y_pred),
            'recall': recall_score(y_test, y_pred),
            'precision': precision_score(y_test, y_pred)
        }
    }
    # Model complexity
    if hasattr(model, 'coef_'):
        results['model_complexity'] = {
            'param_norm': np.linalg.norm(model.coef_),
            'non_zero_params': int(np.sum(model.coef_ != 0))
        }
    # Performance on adversarial examples (if provided)
    if X_adversarial is not None:
        y_pred_adversarial = model.predict(X_adversarial)
        results['adversarial'] = {
            'f1': f1_score(y_test, y_pred_adversarial),
            'recall': recall_score(y_test, y_pred_adversarial),
            'precision': precision_score(y_test, y_pred_adversarial)
        }
        # Performance drop under attack
        results['performance_drop'] = {
            'f1': results['original']['f1'] - results['adversarial']['f1'],
            'recall': results['original']['recall'] - results['adversarial']['recall'],
            'precision': results['original']['precision'] - results['adversarial']['precision']
        }
    return results

Keywords: regularization, security offense and defense, model complexity, adversarial robustness, overfitting, L1 regularization, L2 regularization, Dropout, interpretability