DetNet



Paper: DetNet: A Backbone Network for Object Detection (ECCV 2018)

Contents

    • The Problem
    • DetNet
        • Keeping deep features at high resolution
        • Enlarging the receptive field
        • Reducing network width
        • Building DetNet
    • Experiments

The Problem

  • Classification and detection attend to different things: detection must pay more attention to object scale, position, and similar cues, so the features a classification model extracts are not necessarily suitable for detection
  • Extracting high-level semantics with a deeper network works well for classification, but as the receptive field of deep features grows, the feature-map resolution shrinks and object detail is lost, which hurts localization

DetNet

Three main ideas:

  • Keep the deep features at high resolution
  • Introduce dilated convolution layers to enlarge the receptive field
  • Cut the channel count of the deep features to reduce the extra computation caused by the higher resolution

Keeping deep features at high resolution

In Figure A: FPN's deepest feature map has been downsampled to 1/64 of the input, by which point small-object information is already very blurred. Upsampling it and adding it to shallower features helps somewhat, but the fusion is still fairly crude

In Figure C: DetNet keeps the deep features at 16×16, preserving more small-object information, but this brings two problems:

  • higher computational cost
  • a smaller receptive field

Enlarging the receptive field

How can we obtain a large receptive field while keeping a high resolution?

  • Introduce dilated convolutions, which enlarge the receptive field of the upper stages without reducing the resolution of the deep features (the larger resolution also increases the computation, which can be offset by cutting the channel count appropriately)

Using ResNet-50 as the example: stages 1-4 are identical to the standard ResNet, while the later stages switch to residual blocks built around dilated convolutions, enlarging the receptive field while keeping the deep features at 16×16.
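The "stride=1, padding=2" choice in the implementation below follows directly from the convolution arithmetic: a 3×3 kernel with dilation 2 has an effective extent of 2·(3−1)+1 = 5, so padding 2 with stride 1 leaves the spatial size unchanged while widening a single layer's receptive field from 3×3 to 5×5. A minimal stand-alone check (an illustration added here, not from the original post):

import torch
import torch.nn as nn

x = torch.randn(1, 256, 16, 16)
plain = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1)                # 3x3 receptive field
dilated = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=2, dilation=2)  # 5x5 receptive field
print(plain(x).shape)    # torch.Size([1, 256, 16, 16])
print(dilated(x).shape)  # torch.Size([1, 256, 16, 16]) -- same resolution, larger receptive field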

Reducing network width

In the modified ResNet-50 above, the deep stages all use 256 channels, which offsets the extra computation introduced by the higher resolution.
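To see why 256 channels is enough of a cut, compare the multiply-accumulate cost of one 3×3 convolution, which scales as H·W·3²·C_in·C_out. A back-of-the-envelope sketch (my own illustration; it assumes the paper's stride-16 stage 5, i.e. 14×14 for a 224×224 input, versus ResNet-50's stride-32 stage 5 at 7×7 with 512 mid channels):

def conv3x3_macs(h, w, c_in, c_out):
    # Multiply-accumulate count of a dense 3x3 convolution
    return h * w * 3 * 3 * c_in * c_out

resnet_stage5 = conv3x3_macs(7, 7, 512, 512)    # ResNet-50: stride 32, 512 mid channels
detnet_stage5 = conv3x3_macs(14, 14, 256, 256)  # DetNet: stride 16 kept, 256 channels
print(detnet_stage5 / resnet_stage5)  # 1.0 -- 4x the positions, offset by halving both channel counts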


Building DetNet

# Import the required modules
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import math
import torchvision
from torch.autograd import Variable

'''
Norm = "Batch": all normalization layers below use Batch Normalization
Norm = "Group": all normalization layers below use Group Normalization
'''
Norm = 'Batch'


# Build the residual block; see torchvision.models.resnet50() for the reference implementation
class ResidualBlock(nn.Module):
    '''Sub-module: standard bottleneck Residual Block'''
    expansion = 4

    def __init__(self, inchannel, outchannel, stride=1, shortcut=None, norm=Norm):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(inchannel, outchannel, kernel_size=1, bias=False)
        self.nf_1 = self._Norm_function(norm, outchannel)
        self.conv2 = nn.Conv2d(outchannel, outchannel, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.nf_2 = self._Norm_function(norm, outchannel)
        self.conv3 = nn.Conv2d(outchannel, outchannel * 4, kernel_size=1, bias=False)
        self.nf_3 = self._Norm_function(norm, outchannel * 4)
        self.relu = nn.ReLU(inplace=True)
        self.stride = stride
        # The block has two branches: the left one is the plain convolutional path
        self.left = nn.Sequential(self.conv1, self.nf_1, self.relu,
                                  self.conv2, self.nf_2, self.relu,
                                  self.conv3, self.nf_3)
        # The right one is the skip connection (shortcut)
        self.right = shortcut

    def forward(self, x):
        residual = x
        out = self.left(x)
        if self.right is not None:
            residual = self.right(x)
        out += residual
        out = self.relu(out)
        return out

    def _Norm_function(self, norm, outchannel):
        if norm == 'Batch':
            return nn.BatchNorm2d(outchannel)
        elif norm == 'Group':
            return nn.GroupNorm(num_groups=1, num_channels=outchannel)


class Dilated_BottleNeck(nn.Module):
    '''Sub-module: dilated bottleneck (blocks A and B in the paper)'''
    expansion = 4

    def __init__(self, inchannel, outchannel=256, stride=1, shortcut=None, norm=Norm):
        super(Dilated_BottleNeck, self).__init__()
        self.conv1 = nn.Conv2d(inchannel, outchannel, kernel_size=1, bias=False)
        self.nf_1 = self._Norm_function(norm, outchannel)
        # Dilated convolution: the green sub-block in the paper's Figures A and B.
        # stride=1 with padding=2 keeps the spatial size (see the arithmetic above)
        self.conv2 = nn.Conv2d(outchannel, outchannel, kernel_size=3, stride=stride,
                               padding=2, dilation=2, bias=False)
        self.nf_2 = self._Norm_function(norm, outchannel)
        self.conv3 = nn.Conv2d(outchannel, outchannel, kernel_size=1, bias=False)
        self.nf_3 = self._Norm_function(norm, outchannel)
        self.relu = nn.ReLU(inplace=True)
        self.stride = stride
        self.left = nn.Sequential(self.conv1, self.nf_1, self.relu,
                                  self.conv2, self.nf_2, self.relu,
                                  self.conv3, self.nf_3)
        # The right branch stays empty unless a shortcut is passed in
        self.right = shortcut

    def forward(self, x):
        residual = x
        out = self.left(x)
        if self.right is not None:
            residual = self.right(x)
        out += residual
        out = self.relu(out)
        return out

    def _Norm_function(self, norm, outchannel):
        if norm == 'Group':
            return nn.GroupNorm(num_groups=1, num_channels=outchannel)
        elif norm == 'Batch':
            return nn.BatchNorm2d(outchannel)


class DetNet(nn.Module):
    '''DetNet59: model = DetNet([3, 4, 6])'''

    def __init__(self, layers, num_classes=2):
        # Running input-channel count, updated by _make_layer
        self.inchannel = 64
        self.det_channel = 256
        super(DetNet, self).__init__()
        # Stages 1-4 are the same as in ResNet
        self.stage_1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            self._Norm_function(Norm, outchannel=64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
        self.stage_2 = self._make_layer(ResidualBlock, 64, layers[0], stride=2)
        self.stage_3 = self._make_layer(ResidualBlock, 128, layers[1], stride=2)
        self.stage_4 = self._make_layer(ResidualBlock, 256, layers[2], stride=2)
        # Stages 5 and 6 keep the spatial size: one block B followed by two blocks A
        self.stage_5 = nn.Sequential(
            self._DetNetBlock_B(self.inchannel, self.det_channel),
            self._DetNetBlock_A(self.det_channel, self.det_channel),
            self._DetNetBlock_A(self.det_channel, self.det_channel))
        self.stage_6 = nn.Sequential(
            self._DetNetBlock_B(self.det_channel, self.det_channel),
            self._DetNetBlock_A(self.det_channel, self.det_channel),
            self._DetNetBlock_A(self.det_channel, self.det_channel))
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(self.det_channel, num_classes)

    def forward(self, x):
        x = self.stage_1(x)
        x = self.stage_2(x)
        x = self.stage_3(x)
        x = self.stage_4(x)
        x = self.stage_5(x)
        x = self.stage_6(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    # Build the Figure-A block: identity shortcut
    def _DetNetBlock_A(self, inchannel, outchannel):
        return Dilated_BottleNeck(inchannel, outchannel)

    # Build the Figure-B block: 1x1 projection shortcut
    def _DetNetBlock_B(self, inchannel, outchannel):
        shortcut = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, kernel_size=1, stride=1, padding=0, bias=False),
            self._Norm_function(Norm, outchannel))
        layers = []
        layers.append(Dilated_BottleNeck(inchannel, outchannel, stride=1, shortcut=shortcut))
        return nn.Sequential(*layers)

    def _make_layer(self, block, outchannel, block_num, stride=1):
        shortcut = None
        if stride != 1 or self.inchannel != outchannel * block.expansion:
            # 1x1 projection convolution on the shortcut
            shortcut = nn.Sequential(
                nn.Conv2d(self.inchannel, outchannel * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                self._Norm_function(Norm, outchannel * block.expansion))
        layers = []
        layers.append(block(self.inchannel, outchannel, stride, shortcut))
        # Update the running input-channel count
        self.inchannel = outchannel * block.expansion
        for i in range(1, block_num):
            layers.append(block(self.inchannel, outchannel))
        return nn.Sequential(*layers)

    def _Norm_function(self, norm, outchannel):
        if norm == 'Group':
            return nn.GroupNorm(num_groups=1, num_channels=outchannel)
        elif norm == 'Batch':
            return nn.BatchNorm2d(outchannel)


class ResNet(nn.Module):
    '''Plain ResNet built from ResidualBlock via _make_layer.
    ResNet50:  model = ResNet(ResidualBlock, [3, 4, 6, 3])
    ResNet101: model = ResNet(ResidualBlock, [3, 4, 23, 3])
    '''

    def __init__(self, block, layers, num_classes=1000):
        self.inchannel = 64
        super(ResNet, self).__init__()
        self.Conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            self._Norm_function(Norm, outchannel=64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
        self.Conv2_x = self._make_layer(block, 64, layers[0])
        self.Conv3_x = self._make_layer(block, 128, layers[1], stride=2)
        self.Conv4_x = self._make_layer(block, 256, layers[2], stride=2)
        self.Conv5_x = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)
        # He initialization for convolutions, constant init for BatchNorm
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def forward(self, x):
        x = self.Conv1(x)
        x = self.Conv2_x(x)
        x = self.Conv3_x(x)
        x = self.Conv4_x(x)
        x = self.Conv5_x(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    def _make_layer(self, block, outchannel, block_num, stride=1):
        shortcut = None
        if stride != 1 or self.inchannel != outchannel * block.expansion:
            shortcut = nn.Sequential(
                nn.Conv2d(self.inchannel, outchannel * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                self._Norm_function(Norm, outchannel * block.expansion))
        layers = []
        layers.append(block(self.inchannel, outchannel, stride, shortcut))
        self.inchannel = outchannel * block.expansion
        for i in range(1, block_num):
            layers.append(block(self.inchannel, outchannel))
        return nn.Sequential(*layers)

    def _Norm_function(self, norm, outchannel):
        if norm == 'Batch':
            return nn.BatchNorm2d(outchannel)
        elif norm == 'Group':
            return nn.GroupNorm(num_groups=1, num_channels=outchannel)


if __name__ == '__main__':
    x = Variable(torch.randn(1, 3, 224, 224))
    # model = ResNet(ResidualBlock, [3, 4, 6, 3])
    # model = torchvision.models.resnet50()
    model = DetNet([3, 4, 6])
    y = model(x)
    print(model)
Running the script prints the architecture. Abridged output, with the BN/ReLU layers and repeated blocks elided:

DetNet(
  (stage_1): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (stage_2): Sequential(...)  # 3 x ResidualBlock, 64 -> 256 channels, first block stride 2
  (stage_3): Sequential(...)  # 4 x ResidualBlock, 128 -> 512 channels, first block stride 2
  (stage_4): Sequential(...)  # 6 x ResidualBlock, 256 -> 1024 channels, first block stride 2
  (stage_5): Sequential(
    (0): Sequential(
      (0): Dilated_BottleNeck(  # block B: 1024 -> 256 with projection shortcut
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
        (conv3): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (right): Sequential(
          (0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
    (1): Dilated_BottleNeck(...)  # block A: 256 -> 256, identity shortcut
    (2): Dilated_BottleNeck(...)  # block A: 256 -> 256, identity shortcut
  )
  (stage_6): Sequential(...)  # same structure as stage_5, all blocks 256 -> 256
  (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)
  (fc): Linear(in_features=256, out_features=2, bias=True)
)
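A quick shape check confirms the design: stages 5 and 6 keep the spatial size of stage 4 while holding the width at 256 channels. Note that this implementation already applies stride 2 in stage_2, so a 224×224 input runs the deep stages at 7×7 rather than 16×16. A minimal sketch, assuming the DetNet class defined above:

x = torch.randn(1, 3, 224, 224)
model = DetNet([3, 4, 6])
model.eval()  # freeze BatchNorm statistics for the check
f4 = model.stage_4(model.stage_3(model.stage_2(model.stage_1(x))))
print(f4.shape)              # torch.Size([1, 1024, 7, 7])
f5 = model.stage_5(f4)
print(f5.shape)              # torch.Size([1, 256, 7, 7]) -- same size, narrower
f6 = model.stage_6(f5)
print(f6.shape)              # torch.Size([1, 256, 7, 7])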

Experiments


The AP results at different IoU thresholds show that the higher the IoU threshold, the clearer DetNet's advantage on large objects. This suggests that location information is well preserved in DetNet's deep features: a high IoU threshold stresses the quality of box regression, while a low threshold mainly reflects classification quality

The recall numbers likewise show a clear improvement from DetNet on small objects




