Setting up Detectron2 and training on a custom dataset

This document records, step by step, installing and configuring Detectron2 on Ubuntu 18.04 with an NVIDIA RTX 2060, CUDA 11.6, and PyTorch 1.10.2. First, install the matching torch build as instructed on the PyTorch website. Next, set up Detectron2 following its official documentation, resolving a CUDA path issue along the way. Then test a pretrained model with demo.py. After that, convert a VOC-format dataset to COCO format and split it into train/val/test. Finally, register the dataset with fruitsnuts_data.py and train the model with train.py.


Contents

Environment setup

Preparing the PyTorch environment

Installing and configuring Detectron2

Testing demo.py

Dataset preparation

Training

Training scripts:

Test script:


Environment setup

The setup essentially follows the official Detectron2 documentation:

Installation — detectron2 0.6 documentation

Preparing the PyTorch environment

My hardware is an RTX 2060 on Ubuntu 18.04, with the latest CUDA (11.6). Install torch with the command generated on the PyTorch website, which selects matching versions of the companion packages:

Official site: Start Locally | PyTorch

The final command in my case:

Since the system has CUDA 11.6, the highest prebuilt option is CUDA 11.3; run the command inside the conda virtual environment.

pip3 install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

Installing and configuring Detectron2

git clone https://github.com/facebookresearch/detectron2.git
# do not cd into the detectron2 directory
python -m pip install -e detectron2
pip install opencv-python

At one point the build failed with: No such file or directory: ':/usr/local/cuda-11.1:/usr/local/cuda-11.1/bin/nvcc'

CUDA was configured in both /etc/profile and ~/.bashrc, and another virtual environment ran detectron2 without problems, so CUDA itself was working; the sudden "CUDA not found" error was puzzling. The leading colon in the path suggests CUDA_HOME had ended up holding a PATH-style, colon-separated string rather than a single directory.

Running the following first made the install go through:

# run this before the install
export CUDA_HOME=/usr/local/cuda-11.1
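To see what the build will pick up before reinstalling, a minimal stdlib sketch can inspect CUDA_HOME for exactly this kind of PATH-style pollution (the helper name `check_cuda_home` is hypothetical, not part of detectron2):

```python
import os
import shutil

def check_cuda_home():
    """Return the nvcc path the build would use, or None with a hint printed."""
    cuda_home = os.environ.get("CUDA_HOME", "")
    if ":" in cuda_home:
        # a colon means CUDA_HOME was set to a PATH-like string, not a directory
        print("CUDA_HOME looks PATH-like; re-export a single directory:", cuda_home)
        return None
    nvcc = os.path.join(cuda_home, "bin", "nvcc") if cuda_home else shutil.which("nvcc")
    if nvcc and os.path.exists(nvcc):
        return nvcc
    print("nvcc not found; export CUDA_HOME=/usr/local/cuda-<version>")
    return None
```

If this prints the PATH-like warning, re-export CUDA_HOME as above and retry the pip install.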

Testing demo.py

Since we will train a Faster R-CNN model, first download the matching pretrained weights and run a quick test; make sure the --config-file matches the weights. Download links are in the Detectron2 model zoo on GitHub:

detectron2/MODEL_ZOO.md at main · facebookresearch/detectron2 · GitHub

Test command:

cd demo
python demo.py --config-file ../configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml --input 2.jpg  --opts MODEL.WEIGHTS ../model_final_280758.pkl

I run everything over SSH, so imshow is not available; I commented out the display-related code. Leaving it in raises a "Qt platform plugin" error.

Dataset preparation

Here I use a dataset previously labeled with labelImg:

labelimg制作VOC数据集并用yolov5训练目标检测模型_RayChiu757374816的博客-CSDN博客_labelimg yolo格式
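For reference, each VOC annotation is an XML file with a size block and per-object bndbox entries; the conversion script relies on exactly these tags. A minimal parsing sketch (the XML and its values are synthetic, for illustration only):

```python
import xml.etree.ElementTree as ET

# a minimal synthetic VOC annotation (values made up for illustration)
VOC_XML = """
<annotation>
  <filename>1.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>defect</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(VOC_XML)
width = int(root.find("size/width").text)
height = int(root.find("size/height").text)
# one (class name, [xmin, ymin, xmax, ymax]) tuple per object
boxes = [
    (obj.find("name").text,
     [int(obj.find("bndbox/" + t).text) for t in ("xmin", "ymin", "xmax", "ymax")])
    for obj in root.findall("object")
]
print(width, height, boxes)
```

These are precisely the fields the VOC-to-COCO converter below reads.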

Then convert this VOC-format dataset to COCO format:

import os
import random
import shutil
import sys
import json
import glob
import xml.etree.ElementTree as ET

"""
You only need to set the following three parts
1.val_files_num : num of validation samples from your all samples
2.test_files_num = num of test samples from your all samples
3.voc_annotations : path to your VOC dataset Annotations

"""
# 只需要该下边四个值即可
val_files_num = 100
test_files_num = 100
voc_annotations = '../VOCData/Annotations/'  # remember to modify the path
main_path = '../'

split = voc_annotations.split('/')
coco_name = split[-3]
del split[-3]
del split[-2]
del split[-1]
del split[0]
# print(split)
for i in split:
    main_path += '/' + i

main_path = main_path + '/'

# print(main_path)

coco_path = os.path.join(main_path, coco_name + '_COCO/')
coco_images = os.path.join(main_path, coco_name + '_COCO/images')
coco_json_annotations = os.path.join(main_path, coco_name + '_COCO/annotations/')
xml_val = os.path.join(main_path, 'xml', 'xml_val/')
xml_test = os.path.join(main_path, 'xml/', 'xml_test/')
xml_train = os.path.join(main_path, 'xml/', 'xml_train/')

voc_images = os.path.join(main_path, coco_name, 'JPEGImages/')


# from https://www.php.cn/python-tutorials-424348.html
def mkdir(path):
    path = path.strip()
    path = path.rstrip("\\")
    isExists = os.path.exists(path)
    if not isExists:
        os.makedirs(path)
        print(path + ' ----- folder created')
        return True
    else:
        print(path + ' ----- folder existed')
        return False


# folders to create (full paths)


mkdir(coco_path)
mkdir(coco_images)
mkdir(coco_json_annotations)
mkdir(xml_val)
mkdir(xml_test)
mkdir(xml_train)

# copy VOC images into the COCO images folder
for i in os.listdir(voc_images):
    img_path = os.path.join(voc_images, i)
    shutil.copy(img_path, coco_images)

# copy VOC annotation xmls into the train split (val/test are moved out below)
for i in os.listdir(voc_annotations):
    xml_path = os.path.join(voc_annotations, i)
    shutil.copy(xml_path, xml_train)

print("\n\n %s files copied to %s" % (val_files_num, xml_val))

for i in range(val_files_num):
    if len(os.listdir(xml_train)) > 0:

        random_file = random.choice(os.listdir(xml_train))
        #         print("%d) %s"%(i+1,random_file))
        source_file = "%s/%s" % (xml_train, random_file)

        if random_file not in os.listdir(xml_val):
            shutil.move(source_file, xml_val)
        else:
            random_file = random.choice(os.listdir(xml_train))
            source_file = "%s/%s" % (xml_train, random_file)
            shutil.move(source_file, xml_val)
    else:
        print('The folders are empty; please make sure there are at least %d files to move' % (val_files_num))
        break

for i in range(test_files_num):
    if len(os.listdir(xml_train)) > 0:

        random_file = random.choice(os.listdir(xml_train))
        #         print("%d) %s"%(i+1,random_file))
        source_file = "%s/%s" % (xml_train, random_file)

        if random_file not in os.listdir(xml_test):
            shutil.move(source_file, xml_test)
        else:
            random_file = random.choice(os.listdir(xml_train))
            source_file = "%s/%s" % (xml_train, random_file)
            shutil.move(source_file, xml_test)
    else:
        print('The folders are empty; please make sure there are at least %d files to move' % (test_files_num))
        break

print("\n\n" + "*" * 27 + "[ Done ! Go check your file ]" + "*" * 28)

# !/usr/bin/python

# pip install lxml


START_BOUNDING_BOX_ID = 1
PRE_DEFINE_CATEGORIES = None

# If necessary, pre-define category and its id
#  PRE_DEFINE_CATEGORIES = {"aeroplane": 1, "bicycle": 2, "bird": 3, "boat": 4,
#  "bottle":5, "bus": 6, "car": 7, "cat": 8, "chair": 9,
#  "cow": 10, "diningtable": 11, "dog": 12, "horse": 13,
#  "motorbike": 14, "person": 15, "pottedplant": 16,
#  "sheep": 17, "sofa": 18, "train": 19, "tvmonitor": 20}

"""
main code below are from
https://github.com/Tony607/voc2coco
"""


def get(root, name):
    vars = root.findall(name)
    return vars


def get_and_check(root, name, length):
    vars = root.findall(name)
    if len(vars) == 0:
        raise ValueError("Can not find %s in %s." % (name, root.tag))
    if length > 0 and len(vars) != length:
        raise ValueError(
            "The size of %s is supposed to be %d, but is %d."
            % (name, length, len(vars))
        )
    if length == 1:
        vars = vars[0]
    return vars


def get_filename_as_int(filename):
    try:
        filename = filename.replace("\\", "/")
        filename = os.path.splitext(os.path.basename(filename))[0]
        return int(filename)
    except ValueError:
        raise ValueError("Filename %s is supposed to be an integer." % (filename))


def get_categories(xml_files):
    """Generate category name to id mapping from a list of xml files.
    Arguments:
        xml_files {list} -- A list of xml file paths.
    Returns:
        dict -- category name to id mapping.
    """
    classes_names = []
    for xml_file in xml_files:
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall("object"):
            classes_names.append(member[0].text)
    classes_names = list(set(classes_names))
    classes_names.sort()
    return {name: i for i, name in enumerate(classes_names)}


def convert(xml_files, json_file):
    json_dict = {"images": [], "type": "instances", "annotations": [], "categories": []}
    if PRE_DEFINE_CATEGORIES is not None:
        categories = PRE_DEFINE_CATEGORIES
    else:
        categories = get_categories(xml_files)
    bnd_id = START_BOUNDING_BOX_ID
    for xml_file in xml_files:
        tree = ET.parse(xml_file)
        root = tree.getroot()
        path = get(root, "path")
        if len(path) == 1:
            filename = os.path.basename(path[0].text)
        elif len(path) == 0:
            filename = get_and_check(root, "filename", 1).text
        else:
            raise ValueError("%d paths found in %s" % (len(path), xml_file))
        ## The filename must be a number
        image_id = get_filename_as_int(filename)
        size = get_and_check(root, "size", 1)
        width = int(get_and_check(size, "width", 1).text)
        height = int(get_and_check(size, "height", 1).text)
        image = {
            "file_name": filename,
            "height": height,
            "width": width,
            "id": image_id,
        }
        json_dict["images"].append(image)
        ## Currently we do not support segmentation.
        #  segmented = get_and_check(root, 'segmented', 1).text
        #  assert segmented == '0'
        for obj in get(root, "object"):
            category = get_and_check(obj, "name", 1).text
            if category not in categories:
                new_id = len(categories)
                categories[category] = new_id
            category_id = categories[category]
            bndbox = get_and_check(obj, "bndbox", 1)
            xmin = int(get_and_check(bndbox, "xmin", 1).text) - 1
            ymin = int(get_and_check(bndbox, "ymin", 1).text) - 1
            xmax = int(get_and_check(bndbox, "xmax", 1).text)
            ymax = int(get_and_check(bndbox, "ymax", 1).text)
            assert xmax > xmin
            assert ymax > ymin
            o_width = abs(xmax - xmin)
            o_height = abs(ymax - ymin)
            ann = {
                "area": o_width * o_height,
                "iscrowd": 0,
                "image_id": image_id,
                "bbox": [xmin, ymin, o_width, o_height],
                "category_id": category_id,
                "id": bnd_id,
                "ignore": 0,
                "segmentation": [],
            }
            json_dict["annotations"].append(ann)
            bnd_id = bnd_id + 1

    for cate, cid in categories.items():
        cat = {"supercategory": "none", "id": cid, "name": cate}
        json_dict["categories"].append(cat)

    os.makedirs(os.path.dirname(json_file), exist_ok=True)
    with open(json_file, "w") as json_fp:
        json_fp.write(json.dumps(json_dict))


xml_val_files = glob.glob(os.path.join(xml_val, "*.xml"))
xml_test_files = glob.glob(os.path.join(xml_test, "*.xml"))
xml_train_files = glob.glob(os.path.join(xml_train, "*.xml"))

convert(xml_val_files, coco_json_annotations + 'val2017.json')
convert(xml_test_files, coco_json_annotations + 'test2017.json')
convert(xml_train_files, coco_json_annotations + 'train2017.json')
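After conversion, it is worth sanity-checking the generated JSON files before registering them. A small stdlib sketch (the helper name `summarize_coco` is my own; the file names come from the script above):

```python
import json

def summarize_coco(json_path):
    """Return (num images, num annotations, category names ordered by id)."""
    with open(json_path) as f:
        coco = json.load(f)
    cats = [c["name"] for c in sorted(coco["categories"], key=lambda c: c["id"])]
    return len(coco["images"]), len(coco["annotations"]), cats

# e.g. summarize_coco(coco_json_annotations + 'train2017.json')
```

If the image or annotation counts look wrong, recheck the split sizes at the top of the script.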

Training

Training scripts:

fruitsnuts_data.py

from detectron2.data.datasets import register_coco_instances
from detectron2.data import MetadataCatalog
import os

# declare the classes; keep the order consistent with the annotations
CLASS_NAMES = ["defect", "normal"]
# dataset root
DATASET_ROOT = r'./VOCData_COCO/'
# annotations directory
ANN_ROOT = os.path.join(DATASET_ROOT, 'annotations')
# training images
TRAIN_PATH = os.path.join(DATASET_ROOT, 'images')
# validation images
VAL_PATH = os.path.join(DATASET_ROOT, 'images')
# test images
TEST_PATH = os.path.join(DATASET_ROOT, 'images')
# training annotation file
TRAIN_JSON = os.path.join(ANN_ROOT, 'train2017.json')
# validation annotation file
VAL_JSON = os.path.join(ANN_ROOT, 'val2017.json')
# test annotation file
TEST_JSON = os.path.join(ANN_ROOT, 'test2017.json')

register_coco_instances("my_train", {}, TRAIN_JSON, TRAIN_PATH)
MetadataCatalog.get("my_train").set(thing_classes=CLASS_NAMES,  # 可以选择开启,但是不能显示中文,这里需要注意,中文的话最好关闭
                                    evaluator_type='coco',  # 指定评估方式
                                    json_file=TRAIN_JSON,
                                    image_root=TRAIN_PATH)
register_coco_instances("my_val", {}, VAL_JSON, VAL_PATH)
MetadataCatalog.get("my_val").set(thing_classes=CLASS_NAMES,  # 可以选择开启,但是不能显示中文,这里需要注意,中文的话最好关闭
                                  evaluator_type='coco',  # 指定评估方式
                                  json_file=VAL_JSON,
                                  image_root=VAL_PATH)
register_coco_instances("my_test", {}, TEST_JSON, TEST_PATH)
MetadataCatalog.get("my_test").set(thing_classes=CLASS_NAMES,  # 可以选择开启,但是不能显示中文,这里需要注意,中文的话最好关闭
                                  evaluator_type='coco',  # 指定评估方式
                                  json_file=TEST_JSON,
                                  image_root=TEST_PATH)
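The order of CLASS_NAMES must match the category ids in the JSON (the converter above assigns ids alphabetically); a mismatch silently shifts every label. A small stdlib check, assuming the paths defined above (the helper name `classes_match` is hypothetical):

```python
import json

def classes_match(json_file, class_names):
    """True if class_names order matches the category ids in the COCO json."""
    with open(json_file) as f:
        cats = json.load(f)["categories"]
    json_order = [c["name"] for c in sorted(cats, key=lambda c: c["id"])]
    return json_order == list(class_names)

# e.g. assert classes_match(TRAIN_JSON, CLASS_NAMES)
```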

train.py

from detectron2.engine import DefaultTrainer
from detectron2.config import get_cfg
from detectron2.utils.logger import setup_logger
import os

setup_logger()
import fruitsnuts_data  # importing this module runs the dataset registration

if __name__ == "__main__":
    cfg = get_cfg()
    cfg.merge_from_file(
        "./configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
    )
    cfg.DATASETS.TRAIN = ("my_train",)
    cfg.DATASETS.VAL = ("my_val",)  # note: DATASETS.VAL is not a key detectron2 reads; evaluation uses DATASETS.TEST
    cfg.DATASETS.TEST = ("my_test",)  # optional; leave empty if you have no test set
    cfg.DATALOADER.NUM_WORKERS = 2
    # pretrained weights file
    # download it from the model zoo if you don't have it yet
    cfg.MODEL.WEIGHTS = r"./model_final_280758.pkl"
    # or start from your own checkpoint
    # cfg.MODEL.WEIGHTS = "../tools/output/model_0003191.pth"
    cfg.SOLVER.IMS_PER_BATCH = 2
    cfg.SOLVER.BASE_LR = 0.0025
    # maximum number of iterations
    cfg.SOLVER.MAX_ITER = 20000
    cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128  # faster, and good enough for this small dataset
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2  # 2 classes (defect, normal)
    os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()

Command:

python utilSelf/train.py
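Note that MAX_ITER counts batches, not epochs, so how long the schedule runs depends on the dataset size. A tiny helper to do the arithmetic (the 800-image train set below is an assumed example, not from this dataset):

```python
def iters_to_epochs(max_iter, ims_per_batch, num_train_images):
    """Approximate number of epochs a detectron2 iteration schedule covers."""
    return max_iter * ims_per_batch / num_train_images

# with the settings in train.py and a hypothetical 800-image train split:
print(iters_to_epochs(20000, 2, 800))  # 50.0 epochs
```

If this comes out far too high or low, adjust SOLVER.MAX_ITER accordingly.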

Test script:

import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()

# import some common libraries
import numpy as np
import cv2
import random
import datetime
import time
import os

from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer, ColorMode
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

def Predict():
    register_coco_instances("custom", {}, "./VOCData_COCO/annotations/train2017.json", "./VOCData_COCO/images")
    custom_metadata = MetadataCatalog.get("custom")
    DatasetCatalog.get("custom")


    # im = cv2.imread("./VOCData_COCO/images/1.bmp")
    im = cv2.imread("./utilSelf/002.jpg")
    print(type(im))
    print(im.shape)
    cfg = get_cfg()
    cfg.merge_from_file("configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
    cfg.DATASETS.TEST = ("custom", )
    cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.95
    cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2 
    predictor = DefaultPredictor(cfg)
    outputs = predictor(im)
    v = Visualizer(im[:, :, ::-1],
                   metadata=custom_metadata, 
                   scale=1, 
                   instance_mode=ColorMode.IMAGE_BW   # remove the colors of unsegmented pixels
    )
    #print(outputs["instances"])
    v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    # cv2.imshow('Result',v.get_image()[:, :, ::-1])
    # cv2.waitKey()
    cv2.imwrite('Result.jpg',v.get_image()[:, :, ::-1])


if __name__ == "__main__":
    Predict()

