
Video Object Detection and Recognition

全栈程序员站长
Published 2022-06-30 11:09:52

Hello everyone, nice to see you again. I'm your friend, 全栈君.

An earlier post on the Object Detection API already covered its basic usage, so that won't be repeated here. Below is the code for this post; only a small amount has been added on top of it. Put the test file test.mp4 under models-master\research\object_detection, then create a file named detect_video.py with the following contents:

import os
import cv2
import time
import argparse
import multiprocessing
import numpy as np
import tensorflow as tf
import matplotlib
# Matplotlib chooses the Xwindows backend by default; switch to a headless
# backend before pyplot is imported so the setting takes effect.
matplotlib.use('Agg')
from matplotlib import pyplot as plt
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

'''
Object detection on video
'''

# Path to frozen detection graph. This is the actual model that is used for the object detection.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
PATH_TO_CKPT = os.path.join(MODEL_NAME, 'frozen_inference_graph.pb')

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

NUM_CLASSES = 90

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

def detect_objects(image_np, sess, detection_graph):
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

    # Each box represents a part of the image where a particular object was detected.
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

    # Each score represent how level of confidence for each of the objects.
    # Score is shown on the result image, together with the class label.
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    classes = detection_graph.get_tensor_by_name('detection_classes:0')
    num_detections = detection_graph.get_tensor_by_name('num_detections:0')

    # Actual detection.
    (boxes, scores, classes, num_detections) = sess.run(
        [boxes, scores, classes, num_detections],
        feed_dict={image_tensor: image_np_expanded})

    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=8)
    return image_np

#Load a frozen TF model
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')



#import imageio
#imageio.plugins.ffmpeg.download()
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML

def process_image(image):
    # NOTE: the returned output should be a color (3-channel) image,
    # i.e. the final frame with the detection boxes drawn on it
    with detection_graph.as_default():
        with tf.Session(graph=detection_graph) as sess:
            # If you hit "ValueError: assignment destination is read-only",
            # change the line below to:
            #  image_process = detect_objects(np.array(image), sess, detection_graph)
            image_process = detect_objects(image, sess, detection_graph)
            return image_process

white_output = 'test_out.mp4'
clip1 = VideoFileClip("test.mp4").subclip(1,9)
white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!
white_clip.write_videofile(white_output, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))

Detection result:

Update: here is a standalone script for running detection on an existing video, so it can be used from any path:

from moviepy.editor import VideoFileClip
from IPython.display import HTML
import tensorflow as tf
import cv2 as cv
import time

#Load a frozen TF model
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile('./frozen_inference_graph.pb', 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

def detect_objects(image, sess, detection_graph):

    height = image.shape[0]   
    width = image.shape[1]    
    channel = image.shape[2]  
    start_time = time.time()
    # Run the model
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                    feed_dict={'image_tensor:0': image.reshape(1, height, width, channel)})

    end_time = time.time()
    runtime = end_time - start_time
    print('run time: %.1f ms' % (runtime * 1000))

    # Visualize detected bounding boxes.
    num_detections = int(out[0][0])

    # Iterate through the number of checked out rectangular boxes on the picture
    for i in range(num_detections):
        classId = int(out[3][0][i])
        score = float(out[1][0][i])
        bbox = [float(v) for v in out[2][0][i]]

        if score > 0.8:  # adjust this confidence threshold as needed
            #print(score)
            x = bbox[1] * width
            y = bbox[0] * height
            right = bbox[3] * width
            bottom = bbox[2] * height
            # Draw rectangular box
            font = cv.FONT_HERSHEY_SIMPLEX  # Use default fonts
            cv.rectangle(image, (int(x), int(y)), (int(right), int(bottom)), (0, 0, 255), thickness=2)
            cv.putText(image, '{}:'.format(classId) + str(('%.3f' % score)), (int(x), int(y - 9)), font, 0.6,
                        (0, 0, 255), 1)
    return image


def process_image(image):
    # NOTE: the returned output should be a color (3-channel) image,
    # i.e. the final frame with the detection boxes drawn on it.
    # Creating a new tf.Session per frame is slow; reuse one session if speed matters.
    with detection_graph.as_default():
        with tf.Session(graph=detection_graph) as sess:
            image_process = detect_objects(image, sess, detection_graph)
            return image_process

white_output = 'test_out.mp4'
# Grab frames from the video with VideoFileClip; subclip(1, 9) selects the 1-9 s segment
clip1 = VideoFileClip("test.mp4").subclip(1,9)
# fl_image replaces each frame with the processed one, running detection on every grabbed frame
white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!
# The modified frames are assembled into a new video
white_clip.write_videofile(white_output, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))

The scripts above detect objects in an existing video. So how do we detect objects in the real world in real time? That is also straightforward: create a file named object_detection_tutorial_video.py with the following code:

import numpy as np  
import os  
import six.moves.urllib as urllib  
import sys  
import tarfile  
import tensorflow as tf  
import zipfile  
import matplotlib  
import cv2
# Matplotlib chooses Xwindows backend by default.  
matplotlib.use('Agg')  

from collections import defaultdict  
from io import StringIO  
from matplotlib import pyplot as plt  
from PIL import Image  
from utils import label_map_util  
from utils import visualization_utils as vis_util  

'''
    Detect objects in the camera stream
'''

cap = cv2.VideoCapture(0)  # open the camera

##################### Download Model  
# What model to download.  
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'  
MODEL_FILE = MODEL_NAME + '.tar.gz'  
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'  
  
# Path to frozen detection graph. This is the actual model that is used for the object detection.  
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'  
  
# List of the strings that is used to add correct label for each box.  
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')  
  
NUM_CLASSES = 90  
  
# Download model if not already downloaded  
if not os.path.exists(PATH_TO_CKPT):  
    print('Downloading model... (This may take over 5 minutes)')  
    opener = urllib.request.URLopener()  
    opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)  
    print('Extracting...')  
    tar_file = tarfile.open(MODEL_FILE)  
    for file in tar_file.getmembers():  
        file_name = os.path.basename(file.name)  
        if 'frozen_inference_graph.pb' in file_name:  
            tar_file.extract(file, os.getcwd())  
else:  
    print('Model already downloaded.')  
  
##################### Load a (frozen) Tensorflow model into memory.  
print('Loading model...')  
detection_graph = tf.Graph()  
  
with detection_graph.as_default():  
    od_graph_def = tf.GraphDef()  
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:  
        serialized_graph = fid.read()  
        od_graph_def.ParseFromString(serialized_graph)  
        tf.import_graph_def(od_graph_def, name='')  
  
##################### Loading label map  
print('Loading label map...')  
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)  
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)  
category_index = label_map_util.create_category_index(categories)  
  
##################### Helper code  
def load_image_into_numpy_array(image):  
  (im_width, im_height) = image.size  
  return np.array(image.getdata()).reshape(  
      (im_height, im_width, 3)).astype(np.uint8)  
  
##################### Detection ###########
  
print('Detecting...')  
with detection_graph.as_default():  
    with tf.Session(graph=detection_graph) as sess:
        
        # print(TEST_IMAGE_PATH)
        # image = Image.open(TEST_IMAGE_PATH)
        # image_np = load_image_into_numpy_array(image)
        while True:                              
            ret, image_np = cap.read()           # grab one frame from the camera
            image_np_expanded = np.expand_dims(image_np, axis=0)
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
            [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
             # Print the results of a detection.
            print(scores)
            print(classes)
            print(category_index)
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8)

            cv2.imshow('object detection', cv2.resize(image_np, (800, 600)))
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break

        cap.release()
        cv2.destroyAllWindows()

The only addition here is reading each frame from the camera; the processing is essentially the same as for a static image, so no further explanation is needed. I won't post test results here; run the program yourself to see the output.
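One detail in the loop above worth spelling out: cv2.waitKey can return a value whose upper bits carry platform-specific state, so the code masks with 0xFF before comparing against ord('q'). A small illustration (the raw value below is hypothetical, chosen only to show the effect of masking):

```python
# ord('q') is 113 (0x71). Suppose waitKey returned a raw value whose
# high bits are set (hypothetical value for illustration):
raw = 0x100071

print(raw == ord('q'))         # False: the high bits spoil the comparison
print(raw & 0xFF == ord('q'))  # True: masking keeps only the low byte
```

Note that in Python `&` binds more tightly than `==`, so the unparenthesized expression works as intended.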


Update 2020.05.04

Here is a standalone script that runs real-time detection on the camera feed:

import argparse
import tensorflow as tf
import numpy as np
import time
import cv2 as cv

'''
    video det


    use:

    python Video.py \
        --model=xxx.pb \
        --threshold=0.65
      
'''

# os.environ['CUDA_VISIBLE_DEVICES'] = "0"

parser = argparse.ArgumentParser('TensorFlow')

parser.add_argument('--model', required=True, help='pb file')
parser.add_argument('--threshold', type=float, required=True, help='Detection threshold')
args = parser.parse_args()

# open camera
cap = cv.VideoCapture(0)
if not cap.isOpened():
    print("cannot open camera")
    exit()

# Read the graph.
with tf.gfile.FastGFile(args.model, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)

config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    # Restore session
    sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')

    while True:
        ret, image_np = cap.read()
        if not ret:
            print("Can't receive frame. Exiting...")
            break

        height = image_np.shape[0]
        width = image_np.shape[1]
        channel = image_np.shape[2]

        image_np_expanded = np.expand_dims(image_np, axis=0)

        start_time = time.time()
        # Run the model
        out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                        sess.graph.get_tensor_by_name('detection_scores:0'),
                        sess.graph.get_tensor_by_name('detection_boxes:0'),
                        sess.graph.get_tensor_by_name('detection_classes:0')],
                       feed_dict={'image_tensor:0': image_np_expanded})

        end_time = time.time()
        runtime = end_time - start_time
        print('run time: %.1f ms' % (runtime * 1000))

        # Visualize detected bounding boxes.
        num_detections = int(out[0][0])

        for i in range(num_detections):
            classId = int(out[3][0][i])
            score = float(out[1][0][i])

            bbox = [float(v) for v in out[2][0][i]]
            if score > args.threshold:
                x = bbox[1] * width
                y = bbox[0] * height
                right = bbox[3] * width
                bottom = bbox[2] * height
                # draw boxes
                font = cv.FONT_HERSHEY_SIMPLEX
                cv.rectangle(image_np, (int(x), int(y)), (int(right), int(bottom)), (0, 0, 255), thickness=2)
                cv.putText(image_np, '{}:'.format(classId) + str(('%.3f' % score)), (int(x), int(y - 9)), font, 0.6,
                           (0, 0, 255), 1)

        # show every frame, not only frames containing a detection above the threshold
        cv.imshow('object detection', cv.resize(image_np, (800, 600)))

        if cv.waitKey(1) == ord('q'):
            break

    cap.release()
    cv.destroyAllWindows()
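The standalone scripts label each box with the numeric classId only. If you want readable names without importing label_map_util, a small lookup table is enough (a sketch: the dict below covers only a few classes, the ids follow the mscoco_label_map.pbtxt convention, and the helper name is mine):

```python
# Partial COCO label map (same ids as mscoco_label_map.pbtxt); extend as needed.
COCO_LABELS = {1: 'person', 2: 'bicycle', 3: 'car', 17: 'cat', 18: 'dog'}

def label_for(class_id):
    """Readable label for a detection class id, falling back to the raw id."""
    return COCO_LABELS.get(class_id, 'id:{}'.format(class_id))

print(label_for(1))   # person
print(label_for(42))  # id:42
```

You could then pass label_for(classId) to cv.putText instead of the bare number.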

Publisher: 全栈程序员栈长. Please credit the source when republishing: https://javaforall.cn/132180.html Original link: https://javaforall.cn

