
Original YOLOv8 Improvement: A New Lightweight Real-Time Detection Algorithm

Original article by AI小怪兽 · Last modified 2024-01-31 09:12:50 · From the column: YOLO大作战

💡💡💡 Exclusive improvement in this article: a new lightweight real-time detection algorithm that rebuilds the entire backbone with MobileViT, yielding two improved variants: YOLOv8_MobileViT and YOLOv8_MobileViT-p2.

💡💡💡 Compared against YOLOv8s, GFLOPs drop from the original 28.6 to 17.3 and 21.4, respectively:

| Model | Layers | Parameters | GFLOPs | Size (KB) |
| --- | --- | --- | --- | --- |
| yolov8s | 225 | 11135971 | 28.6 | 87459 |
| YOLOv8_MobileViT | 447 | 4399732 | 17.3 | 34968 |
| YOLOv8_MobileViT-p2 | 499 | 4314869 | 21.4 | 34489 |
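The savings can be checked arithmetically from the figures above. A quick sketch (numbers copied from the table, not re-measured):

```python
# Figures copied from the comparison table above (not re-measured).
baseline = {"params": 11_135_971, "gflops": 28.6}
variants = {
    "YOLOv8_MobileViT":    {"params": 4_399_732, "gflops": 17.3},
    "YOLOv8_MobileViT-p2": {"params": 4_314_869, "gflops": 21.4},
}

def reduction_pct(new, old):
    """Percentage reduction of `new` relative to `old`, one decimal place."""
    return round((old - new) / old * 100, 1)

for name, v in variants.items():
    print(f"{name}: params -{reduction_pct(v['params'], baseline['params'])}%, "
          f"GFLOPs -{reduction_pct(v['gflops'], baseline['gflops'])}%")
```

Both variants cut parameters by roughly 60% relative to YOLOv8s, while GFLOPs drop by about 40% and 25%, respectively; the p2 variant trades some of the FLOP savings for its extra high-resolution branch.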

1. Method Overview

The original authors built their optimization on the YOLOv5 framework and achieved strong performance.

Abstract: We propose XM-YOLOViT, a novel real-time pedestrian and vehicle detection algorithm for foggy weather built on the YOLOv5 framework, which effectively addresses dense-object interference and haze occlusion and improves detection in complex foggy environments. First, inverted residual blocks and MobileViTv3 blocks are introduced to build the XM-net feature-extraction network. Second, EIOU is adopted as the localization loss, and a high-resolution detection layer is added in the neck. On the data side, a fogging method that maps images from clear space to fog space is designed based on the atmospheric scattering model and the dark channel prior. Finally, the algorithm is validated on four datasets under different fog conditions. Experimental results show that XM-YOLOViT achieves precision, recall, and mAP of 54.95%, 41.93%, and 43.15%, with an F1-score of 0.474 — improvements of 3.42%, 7.08%, 7.52%, and 13.94%, respectively; model parameters are reduced by 41.7% to 4.09M, FLOPs are 25.2G, and detection speed reaches 70.93 FPS. XM-YOLOViT outperforms state-of-the-art YOLO detectors, improving F1-score and mAP by 5.57% and 3.65% over YOLOv7-tiny, and by 2.38% and 2.37% over YOLOv8s. The proposed XM-YOLOViT therefore offers high detection accuracy with an extremely light structure, and can effectively improve the efficiency and quality of UAV detection tasks in foggy weather, especially for extremely small targets.
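The fogging step in the abstract (mapping images from clear space to fog space) follows the standard atmospheric scattering model, I = J·t + A·(1 − t) with transmission t = exp(−β·d). A minimal NumPy sketch, assuming a float image in [0, 1] and a per-pixel depth map; the original paper additionally uses the dark channel prior to estimate these quantities, and the function name here is illustrative:

```python
import numpy as np

def fogify(img, depth, beta=1.0, airlight=0.9):
    """Map a clear image J into fog space via the atmospheric scattering
    model: I = J*t + A*(1 - t), with transmission t = exp(-beta * depth).

    img:      HxWx3 float array in [0, 1] (the clear image J)
    depth:    HxW float array (scene depth d)
    beta:     fog density coefficient
    airlight: global atmospheric light A
    """
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, broadcast over channels
    return img * t + airlight * (1.0 - t)
```

With beta = 0 (no fog) the image passes through unchanged; as beta grows, every pixel converges toward the airlight A, which matches the visual appearance of dense haze.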

2. Adapting the Idea to YOLOv8

2.1 yolov8_MobileViT.yaml

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients,  28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients,  79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone (MobileViT)
backbone:
  # [from, repeats, module, args]
  - [-1, 1, ConvLayer, [16, 6, 2]]                          # 0, conv1,  stride 2
  - [-1, 1, InvertedResidual, [32, 1, 4]]                   # 1, layer1, stride 2
  - [-1, 1, InvertedResidual, [64, 2, 4]]                   # 2, layer2, stride 4
  - [-1, 2, InvertedResidual, [64, 1, 4]]                   # 3,         stride 4
  - [-1, 1, InvertedResidual, [96, 2, 4]]                   # 4, layer3, stride 8
  - [-1, 1, MobileViTBlock, [144, 288, 2, 0, 0, 0, 2, 2]]   # 5,         stride 8
  - [-1, 1, InvertedResidual, [128, 2, 4]]                  # 6, layer4, stride 16
  - [-1, 1, MobileViTBlock, [192, 384, 4, 0, 0, 0, 2, 2]]   # 7,         stride 16
  - [-1, 1, InvertedResidual, [160, 2, 4]]                  # 8, layer5, stride 32
  - [-1, 1, MobileViTBlock, [240, 480, 3, 0, 0, 0, 2, 2]]   # 9,         stride 32
  - [-1, 1, SPPF, [160, 5]]                                 # 10, SPPF

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 7], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 5], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```
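The backbone above relies on custom modules (ConvLayer, InvertedResidual, MobileViTBlock) that must be implemented and registered in Ultralytics' module parser before this YAML will load. As a rough orientation, the InvertedResidual entries with args [c_out, stride, expand_ratio] correspond to the MobileNetV2-style inverted residual block; a simplified PyTorch sketch (argument order inferred from the YAML, not taken from the paper's code, and SiLU chosen to match YOLOv8's convention):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual: 1x1 expand -> 3x3 depthwise -> 1x1 linear project."""

    def __init__(self, c_in, c_out, stride=1, expand_ratio=4):
        super().__init__()
        hidden = c_in * expand_ratio
        # Residual shortcut only when the block keeps both resolution and channel count.
        self.use_res = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv2d(c_in, hidden, 1, bias=False),                              # expand
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),  # depthwise
            nn.BatchNorm2d(hidden),
            nn.SiLU(),
            nn.Conv2d(hidden, c_out, 1, bias=False),                             # project (no activation)
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y

# A stride-2 instance halves spatial resolution, mirroring e.g. [64, 2, 4] above.
out = InvertedResidual(32, 64, stride=2, expand_ratio=4)(torch.randn(1, 32, 16, 16))
print(out.shape)
```

Note how the downsampling in this backbone happens inside the stride-2 inverted residuals rather than in standalone Conv layers as in the stock YOLOv8 backbone.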

2.2 yolov8_MobileViT-p2.yaml

```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P2-P5 outputs. For usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients,  28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients,  79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone (MobileViT)
backbone:
  # [from, repeats, module, args]
  - [-1, 1, ConvLayer, [16, 6, 2]]                          # 0, conv1,  stride 2
  - [-1, 1, InvertedResidual, [32, 1, 4]]                   # 1, layer1, stride 2
  - [-1, 1, InvertedResidual, [64, 2, 4]]                   # 2, layer2, stride 4
  - [-1, 2, InvertedResidual, [64, 1, 4]]                   # 3,         stride 4
  - [-1, 1, InvertedResidual, [96, 2, 4]]                   # 4, layer3, stride 8
  - [-1, 1, MobileViTBlock, [144, 288, 2, 0, 0, 0, 2, 2]]   # 5,         stride 8
  - [-1, 1, InvertedResidual, [128, 2, 4]]                  # 6, layer4, stride 16
  - [-1, 1, MobileViTBlock, [192, 384, 4, 0, 0, 0, 2, 2]]   # 7,         stride 16
  - [-1, 1, InvertedResidual, [160, 2, 4]]                  # 8, layer5, stride 32
  - [-1, 1, MobileViTBlock, [240, 480, 3, 0, 0, 0, 2, 2]]   # 9,         stride 32
  - [-1, 1, SPPF, [160, 5]]                                 # 10, SPPF

# YOLOv8.0-p2 head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 7], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 5], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 16 (P3/8-small)

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 3], 1, Concat, [1]]  # cat backbone P2
  - [-1, 3, C2f, [128]]  # 19 (P2/4-xsmall)

  - [-1, 1, Conv, [128, 3, 2]]
  - [[-1, 16], 1, Concat, [1]]  # cat head P3
  - [-1, 3, C2f, [256]]  # 22 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 25 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 28 (P5/32-large)

  - [[19, 22, 25, 28], 1, Detect, [nc]]  # Detect(P2, P3, P4, P5)
```
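The only difference from the first YAML is the extra P2/4 branch and the four-scale Detect head. A stride-4 level quadruples the grid cells available for tiny objects, which is why the underlying paper highlights extremely small UAV targets. The feature-grid side length at each pyramid level can be computed directly from the strides (a small illustrative helper, not part of Ultralytics):

```python
import math

def grid_sizes(imgsz=640, strides=(4, 8, 16, 32)):
    """Feature-map side length at each pyramid level Pk, where stride = 2**k."""
    return {f"P{int(math.log2(s))}": imgsz // s for s in strides}

print(grid_sizes(640))  # P2 gets a 160x160 grid vs. 80x80 for P3
```

The cost of the extra branch is visible in the table at the top: the p2 variant has slightly fewer parameters but about 4 more GFLOPs than plain YOLOv8_MobileViT, since the 160x160 grid is expensive to process.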


Statement of originality: This article is published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission. For infringement concerns, contact cloudcommunity@tencent.com for removal.
