
Introduction to the YOLOv8 Innovation and Improvement Column

Author: AI小怪兽 · Last modified 2023-11-08

This post works through infrared dim-and-small target detection with YOLOv8, analyzes the pain points, and introduces the improvement ("magic modification") schemes covered in this column.

1. Infrared Dim and Small Target Dataset

Single-frame InfraRed Small Target (SIRST)

Dataset size: 427 images. Applying 3× data augmentation (each original plus three augmented copies) brings the total to 1,708 images, which are then randomly split into training, validation, and test sets at a ratio of 8:1:1.

1.1 Dataset Split

Run split_train_val.py to generate trainval.txt, train.txt, val.txt, and test.txt:

# coding:utf-8

import os
import random
import argparse

parser = argparse.ArgumentParser()
# Path to the XML annotations; adjust for your data. XML files usually live under Annotations/.
parser.add_argument('--xml_path', default='Annotations', type=str, help='input xml label path')
# Output directory for the split files; point this at ImageSets/Main under your dataset.
parser.add_argument('--txt_path', default='ImageSets/Main', type=str, help='output txt label path')
opt = parser.parse_args()

trainval_percent = 0.9  # fraction of all samples used for train+val (the rest is test)
train_percent = 0.8     # fraction of train+val used for training (the rest is val)
xmlfilepath = opt.xml_path
txtsavepath = opt.txt_path
total_xml = os.listdir(xmlfilepath)
if not os.path.exists(txtsavepath):
    os.makedirs(txtsavepath)

num = len(total_xml)
list_index = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(list_index, tv)  # indices assigned to train+val
train = random.sample(trainval, tr)       # subset of train+val assigned to train

file_trainval = open(txtsavepath + '/trainval.txt', 'w')
file_test = open(txtsavepath + '/test.txt', 'w')
file_train = open(txtsavepath + '/train.txt', 'w')
file_val = open(txtsavepath + '/val.txt', 'w')

for i in list_index:
    name = total_xml[i][:-4] + '\n'  # strip the .xml extension
    if i in trainval:
        file_trainval.write(name)
        if i in train:
            file_train.write(name)
        else:
            file_val.write(name)
    else:
        file_test.write(name)

file_trainval.close()
file_train.close()
file_val.close()
file_test.close()
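
To run the split (matching the argparse defaults above):

python split_train_val.py --xml_path Annotations --txt_path ImageSets/Main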

1.2 Generating the Corresponding Labels with voc_label.py

# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET
import os

sets = ['trainval', 'test']
classes = ["Target"]  # change to your own class names
abs_path = os.getcwd()
print(abs_path)

def convert(size, box):
    # Convert a VOC box (xmin, xmax, ymin, ymax) in pixels to a normalized
    # YOLO box (x_center, y_center, width, height) in [0, 1].
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return x, y, w, h

def convert_annotation(image_id):
    in_file = open('Annotations/%s.xml' % (image_id), encoding='UTF-8')
    out_file = open('labels/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        # Treat a missing <difficult> tag as "not difficult".
        # (Some datasets capitalize the tag as <Difficult>.)
        difficult_node = obj.find('difficult')
        difficult = difficult_node.text if difficult_node is not None else '0'
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # Clip out-of-bounds annotations to the image borders
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        if b1 < 0:
            b1 = 0
        if b3 < 0:
            b3 = 0
        b = (b1, b2, b3, b4)
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
    in_file.close()
    out_file.close()

for image_set in sets:
    if not os.path.exists('labels/'):
        os.makedirs('labels/')
    image_ids = open('ImageSets/Main/%s.txt' % (image_set)).read().strip().split()
    list_file = open('%s.txt' % (image_set), 'w')
    for image_id in image_ids:
        list_file.write(abs_path + '/images/%s.png\n' % (image_id))
        convert_annotation(image_id)
    list_file.close()
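
A quick sanity check of the VOC-to-YOLO conversion on a hypothetical 640×512 image (the box values below are made up for illustration):

# Same convert() as in voc_label.py above, restated for a standalone check.
def convert(size, box):
    dw, dh = 1. / size[0], 1. / size[1]
    x = ((box[0] + box[1]) / 2.0 - 1) * dw
    y = ((box[2] + box[3]) / 2.0 - 1) * dh
    w = (box[1] - box[0]) * dw
    h = (box[3] - box[2]) * dh
    return x, y, w, h

print(convert((640, 512), (100.0, 200.0, 120.0, 230.0)))
# -> (0.2328125, 0.33984375, 0.15625, 0.21484375)
# which voc_label.py would write as: "0 0.2328125 0.33984375 0.15625 0.21484375"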

1.3 Infrared Small-Target Dataset Analysis

The label-distribution plots produced during training summarize the dataset:

The top-left panel shows the amount of training data per class (instance counts).

The top-right panel shows the sizes and counts of the bounding boxes.

The bottom-left panel shows where box centers fall relative to the whole image.

The bottom-right panel shows box width and height as fractions of the whole image.

The correlogram relates the center coordinates x and y to the box width and height; its diagonal panels show the marginal distribution of each variable:

Panel (0,0) shows the distribution of the center x coordinate: most centers cluster around the middle of the image.

Panel (1,1) shows the distribution of the center y coordinate, again concentrated near the middle of the image. Panel (2,2) shows the distribution of box widths: most are roughly half the image width. Panel (3,3) shows the distribution of box heights: most exceed half the image height.
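
These distributions can be recomputed directly from the generated label files. A minimal sketch in Python, assuming the labels/ directory produced by voc_label.py in Section 1.2 (NumPy is the only dependency):

import glob
import numpy as np

# Gather normalized (x_center, y_center, width, height) from every YOLO label file.
rows = []
for path in glob.glob('labels/*.txt'):
    with open(path) as f:
        for line in f:
            _cls, x, y, w, h = line.split()
            rows.append([float(x), float(y), float(w), float(h)])
rows = np.array(rows)

# Summary statistics behind the four distribution panels described above.
for name, col in zip(['x_center', 'y_center', 'width', 'height'], rows.T):
    print(f'{name}: mean={col.mean():.3f}  median={np.median(col):.3f}')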

2. Infrared Small-Target Detection Based on YOLOv8

2.1 Hyperparameter Settings

default.yaml

# Ultralytics YOLO 🚀, GPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect  # YOLO task, i.e. detect, segment, classify, pose
mode: train  # YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------------------------------------------------
model:  # path to model file, i.e. yolov8n.pt, yolov8n.yaml
data:  # path to data file, i.e. coco128.yaml
epochs: 200  # number of epochs to train for
patience: 50  # epochs to wait for no observable improvement for early stopping of training
batch: 32  # number of images per batch (-1 for AutoBatch)
imgsz: 640  # size of input images as integer or w,h
save: True  # save train checkpoints and predict results
save_period: -1 # Save checkpoint every x epochs (disabled if < 1)
cache: False  # True/ram, disk or False. Use cache for data loading
device:  # device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 0  # number of worker threads for data loading (per RANK if DDP)
project:  # project name
name:  # experiment name, results saved to 'project/name' directory
exist_ok: False  # whether to overwrite existing experiment
pretrained: False  # whether to use a pretrained model
optimizer: SGD  # optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp']
verbose: True  # whether to print verbose output
seed: 0  # random seed for reproducibility
deterministic: True  # whether to enable deterministic mode
single_cls: False  # train multi-class data as single-class
image_weights: False  # use weighted image selection for training
rect: False  # support rectangular training if mode='train', support rectangular evaluation if mode='val'
cos_lr: False  # use cosine learning rate scheduler
close_mosaic: 10  # disable mosaic augmentation for final 10 epochs
resume: False  # resume training from last checkpoint
amp: True  # Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
# Segmentation
overlap_mask: True  # masks should overlap during training (segment train only)
mask_ratio: 4  # mask downsample ratio (segment train only)
# Classification
dropout: 0.0  # use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------------------------------------------------
val: True  # validate/test during training
split: val  # dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False  # save results to JSON file
save_hybrid: False  # save hybrid version of labels (labels + additional predictions)
conf:  # object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.7  # intersection over union (IoU) threshold for NMS
max_det: 300  # maximum number of detections per image
half: False  # use half precision (FP16)
dnn: False  # use OpenCV DNN for ONNX inference
plots: True  # save plots during train/val

# Prediction settings --------------------------------------------------------------------------------------------------
source:  # source directory for images or videos
show: False  # show results if possible
save_txt: False  # save results as .txt file
save_conf: False  # save results with confidence scores
save_crop: False  # save cropped images with results
hide_labels: False  # hide labels
hide_conf: False  # hide confidence scores
vid_stride: 1  # video frame-rate stride
line_thickness: 3  # bounding box thickness (pixels)
visualize: False  # visualize model features
augment: False  # apply image augmentation to prediction sources
agnostic_nms: False  # class-agnostic NMS
classes:  # filter results by class, i.e. class=0, or class=[0,2,3]
retina_masks: False  # use high-resolution segmentation masks
boxes: True  # Show boxes in segmentation predictions

# Export settings ------------------------------------------------------------------------------------------------------
format: torchscript  # format to export to
keras: False  # use Keras
optimize: False  # TorchScript: optimize for mobile
int8: False  # CoreML/TF INT8 quantization
dynamic: False  # ONNX/TF/TensorRT: dynamic axes
simplify: False  # ONNX: simplify model
opset:  # ONNX: opset version (optional)
workspace: 4  # TensorRT: workspace size (GB)
nms: False  # CoreML: add NMS

# Hyperparameters ------------------------------------------------------------------------------------------------------
lr0: 0.01  # initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.01  # final learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 7.5  # box loss gain
cls: 0.5  # cls loss gain (scale with pixels)
dfl: 1.5  # dfl loss gain
fl_gamma: 0.0  # focal loss gamma (efficientDet default gamma=1.5)
label_smoothing: 0.0  # label smoothing (fraction)
nbs: 64  # nominal batch size
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)

# Custom config.yaml ---------------------------------------------------------------------------------------------------
cfg:  # for overriding defaults.yaml

# Debug, do not modify -------------------------------------------------------------------------------------------------
v5loader: False  # use legacy YOLOv5 dataloader

# Tracker settings ------------------------------------------------------------------------------------------------------
tracker: botsort.yaml  # tracker type, ['botsort.yaml', 'bytetrack.yaml']
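
Note that epochs: 200, batch: 32, and workers: 0 above appear to be the author's task-specific values rather than the stock Ultralytics defaults. The same settings can equally be passed as CLI overrides without editing default.yaml, e.g.:

yolo detect train model=yolov8s.yaml data=ultralytics/datasets/InfraRedSmallTarget.yaml epochs=200 batch=32 workers=0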

2.2 Start Training

yolo detect train model=yolov8s.yaml data=ultralytics/datasets/InfraRedSmallTarget.yaml
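
The command references a dataset config that the article does not show. A minimal sketch of what ultralytics/datasets/InfraRedSmallTarget.yaml might contain, assuming the trainval.txt/test.txt image lists generated by voc_label.py in Section 1.2 (the root path and the choice of test.txt as the validation split are assumptions):

# InfraRedSmallTarget.yaml -- hypothetical sketch; adjust paths to your layout
path: /path/to/InfraRedSmallTarget   # dataset root
train: trainval.txt                  # image list produced by voc_label.py
val: test.txt                        # held-out image list

names:
  0: Target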

3. Results Analysis

mAP@0.5 reaches 0.755:

| model  | layers | parameters | GFLOPs | size (KB) | mAP@0.5 |
|--------|--------|------------|--------|-----------|---------|
| yolov8 | 168    | 3,005,843  | 8.1    | 6,103     | 0.755   |
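
The held-out split can be re-evaluated from the saved weights with the Ultralytics CLI; a minimal sketch (runs/detect/train/weights/best.pt is the default save location and may differ on your machine):

yolo detect val model=runs/detect/train/weights/best.pt data=ultralytics/datasets/InfraRedSmallTarget.yaml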

4. Introduction to the Column's Innovations ("Magic Modifications")

💡💡💡 YOLOv8 Magician: exclusive, first-release original innovations, continuously updated, applicable to YOLOv5, YOLOv7, YOLOv8, and the rest of the YOLO family. Every column article provides step-by-step instructions and source code, making it easy to get started modifying the network.

💡💡💡 Key takeaway: after working through this column, you will be able to modify the network yourself, intervening at different points (backbone, head, detect, loss, etc.) to produce your own innovations!

About the column:

✨✨✨ Original network modifications, reproductions of cutting-edge papers, and combined optimization innovations

🚀🚀🚀 Performance gains on small targets, occlusions, and hard samples

🍉🍉🍉 Continuously updated, with accuracy gains reported regularly across different datasets

The column provides every improvement step together with its source code, ready to use out of the box, so you can easily gain accuracy on your own dataset. It covers attention mechanisms, small-target detection, backbone & head optimization, IoU & loss optimization, optimizer improvements, convolution-variant improvements, and lightweight networks combined with YOLOv8.

Latest optimization points of 2023, highly innovative:

1. Sea_AttentionBlock, a lightweight and efficient attention module (ICLR 2023)
2. EMA, efficient multi-scale attention based on cross-spatial learning (ICASSP 2023)
3. BiFormer (CVPR 2023): an efficient pyramid network architecture built on dynamic sparse attention
4. Exclusive first release: deformable self-attention
5. LSKNet (ICCV 2023): new SOTA for rotated object detection in remote sensing; LSKblockAttention for small-target detection
6. Exclusive first release: the multi-dimensional collaborative attention module MCA
7. Channel Prior Convolutional Attention (CPCA), 2023
8. Deformable large-kernel attention, surpassing self-attention (published August 2023)
9. VanillaNet, Huawei Noah's 2023 minimalist neural network: VanillaBlock for detection
10. RepViT (ICCV 2023): the latest open-source mobile network architecture
11. MPDIoU, a novel bounding-box similarity metric (Elsevier 2023)
12. InceptionNeXt (CVPR 2023), for small-target detection
13. The Lion optimizer, a strong 2023 release from Google
14. Time to replace Adam! Stanford's 2023 Sophia optimizer, 2× faster than Adam
15. InternImage (CVPR 2023): new mechanisms and the extended DCNv3
16. FasterNet (CVPR 2023), far ahead of ShuffleNet, MobileNet, and MobileViT, introducing the PConv structure
17. SCConv (CVPR 2023): spatial and channel reconstruction convolution
18. Deformable large-kernel attention, surpassing self-attention (published August 2023)
19. Dynamic Snake Convolution (ICCV 2023)
20. Dual-ViT: a multi-scale dual vision Transformer; Dualattention for detection (TPAMI 2023)
21. Large Separable Kernel Attention: strong accuracy gains with significantly reduced computational complexity and memory (published August 2023)
22. Original first release: Multi-Scale Dilated Attention (MSDA) from DilateFormer (CAS Q1 journal, September 2023)

Attention mechanisms, ready to use out of the box:

1. CoordAttention: capturing both local and global spatial relationships
2. Contextual Transformer attention (CoTAttention)
3. Polarized Self-Attention, an improved self-attention mechanism
4. SimAM (parameter-free attention) and NAM (normalization-based attention module)
5. DoubleAttention (dual attention) and SKAttention (stacked SK blocks)
6. ContextAggregation: a context-augmentation and feature-refinement network
7. Sea_AttentionBlock, a lightweight and efficient attention module (ICLR 2023)
8. EMA, efficient multi-scale attention based on cross-spatial learning (ICASSP 2023)
9. BiFormer (CVPR 2023): an efficient pyramid network architecture built on dynamic sparse attention
10. MobileViTAttention: MobileViT, the lightweight general-purpose vision Transformer for mobile devices
11. Exclusive first release: deformable self-attention
12. LSKNet (ICCV 2023): new SOTA for rotated object detection in remote sensing; LSKblockAttention for small-target detection
13. Exclusive first release: the multi-dimensional collaborative attention module MCA
14. Receptive-Field Attention Convolution (RFAConv)
15. Channel Prior Convolutional Attention (CPCA), 2023
16. Non-local attention
17. Deformable large-kernel attention, surpassing self-attention (published August 2023)
18. Large Separable Kernel Attention: strong accuracy gains with significantly reduced computational complexity and memory (published August 2023)
19. Original first release: Multi-Scale Dilated Attention (MSDA) from DilateFormer (CAS Q1 journal, September 2023)
20. Novel Multi-Scale Convolutional Attention (MSCA), plug-and-play, for small-target detection (NeurIPS 2022)

Backbone & head optimization:

1. GiraffeDet, a lightweight detector with a light backbone and heavy neck that covers everything from small to large targets
2. Weighted bi-directional feature pyramid network (BiFPN)
3. VanillaNet, Huawei Noah's 2023 minimalist neural network: VanillaBlock for detection
4. RepViT (ICCV 2023): the latest open-source mobile network architecture
5. Asymptotic Feature Pyramid Network (AFPN)
6. Lightweight Slim-Neck (plug-and-play series)
7. RevCol, a new paradigm for large-model architecture design (ICLR 2023)
8. Exclusive innovation: the Partial_C_Detect detection-head redesign for accuracy gains (novel detection-head series)
9. Exclusive innovation: the SC_C_Detect detection-head redesign for accuracy gains (novel detection-head series)

IoU & loss optimization:

1. WIoU, SIoU, EIoU, and α-IoU, with accuracy gains verified on multiple datasets
2. Wasserstein Distance Loss
3. Soft-NMS, for better accuracy in dense occlusion scenes
4. Soft-NMS combined with the IoU variants GIoU, DIoU, CIoU, EIoU, and SIoU
5. MPDIoU, a novel bounding-box similarity metric (Elsevier 2023)
6. SlideLoss, addressing the imbalance between easy and hard samples
7. An upgraded SlideLoss that adjusts sample difficulty dynamically with IoU, improving performance on small targets and occlusions

Small-target performance gains:

1. InceptionNeXt (CVPR 2023), for small-target detection
2. Performance gains on small targets and occlusions (SEAM, MultiSEAM)
3. A multi-head detection head for better small-target accuracy
4. NWD applied to label assignment, NMS, and the loss function of anchor-based detectors, building a strong tiny-object detector
5. SPD-Conv, with clear gains on low-resolution images and small objects
6. ECVBlock for small-target detection, plug-and-play
7. A context-augmentation and feature-refinement network for tiny-object detection
8. BiFPN, with significant gains on small targets
9. ODConv + ConvNeXt for better small-target detection
10. BiFormer (CVPR 2023), with clear gains on small targets
11. Asymptotic Feature Pyramid Network (AFPN)

Optimizer improvements:

1. The Lion optimizer, a strong 2023 release from Google
2. Time to replace Adam! Stanford's 2023 Sophia optimizer, 2× faster than Adam

Convolution-variant improvements:

1. The DCNv2 convolution variant
2. SPD-Conv
3. The NCB convolution block and the novel NTB Transformer block
4. InternImage (CVPR 2023): new mechanisms and the extended DCNv3
5. FasterNet (CVPR 2023), far ahead of ShuffleNet, MobileNet, and MobileViT, introducing the PConv structure
6. SCConv (CVPR 2023): spatial and channel reconstruction convolution
7. Deformable large-kernel attention, surpassing self-attention (published August 2023)
8. Dynamic Snake Convolution (ICCV 2023)
9. Large Separable Kernel Attention: strong accuracy gains with significantly reduced computational complexity and memory (published August 2023)
10. Original first release: Multi-Scale Dilated Attention (MSDA) from DilateFormer (CAS Q1 journal, September 2023)

Lightweight networks combined with YOLOv8:

1. Huawei GhostNet, surpassing Google's MobileNet (CVPR 2020)
2. Huawei GhostNetV2, the new SOTA for small on-device models (NeurIPS 2022 Spotlight)
3. Huawei's GhostNet upgraded again: G_Ghost, a minimalist AI network that is optimal across all hardware (IJCV 2022)
4. RepGhost: a hardware-efficient Ghost module via re-parameterization

For details, see:

https://blog.csdn.net/m0_63774211/article/details/131519223
