Reading the U2-Net code




In my pipeline, the upstream detection-and-tracking stage hands me an ROI image as a 128×64×3 ndarray. After the transform it becomes the network input x with shape torch.Size([1, 3, 320, 320]).
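The exact transform depends on the preprocessing pipeline in use; as a rough sketch, the common U2-Net-style recipe (resize to 320×320, scale to [0, 1], ImageNet-style normalization, HWC→NCHW) could look like the following — the function name `preprocess_roi` and the normalization constants are illustrative, not taken from this code:

```python
import numpy as np
import torch
import torch.nn.functional as F

def preprocess_roi(roi: np.ndarray, size: int = 320) -> torch.Tensor:
    """Turn an (H, W, 3) uint8 ROI into a [1, 3, size, size] float tensor.

    The resize + normalization here follow the usual U2-Net test-time recipe;
    an actual pipeline may use different constants or interpolation.
    """
    img = torch.from_numpy(roi.astype(np.float32) / 255.0)   # (H, W, 3), values in [0, 1]
    img = img.permute(2, 0, 1).unsqueeze(0)                  # (1, 3, H, W)
    img = F.interpolate(img, size=(size, size), mode="bilinear", align_corners=False)
    mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
    return (img - mean) / std

# a 128x64x3 ROI, as produced by the upstream tracker in this post
x = preprocess_roi(np.zeros((128, 64, 3), dtype=np.uint8))
print(x.shape)  # torch.Size([1, 3, 320, 320])
```

Note that resizing a 128×64 ROI to a square 320×320 distorts the aspect ratio; whether to pad instead is a pipeline choice.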

Running the model on this input and printing the modules gives the trace below. Throughout, every REBNCONV block is Conv2d(3×3, stride 1) → BatchNorm2d (PyTorch defaults: eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) → ReLU(inplace=True); every pool is MaxPool2d(kernel_size=2, stride=2, padding=0, ceil_mode=True); "upsample" means bilinear interpolation to the spatial size of the named target feature map.

Encoder

(stage1) RSU7, input hx = x:
- (rebnconvin) Conv2d(3, 64, pad 1) → hxin [1, 64, 320, 320]
- (rebnconv1) Conv2d(64, 32) → hx1 [1, 32, 320, 320]; (pool1) → hx [1, 32, 160, 160]
- (rebnconv2) Conv2d(32, 32) → hx2 [1, 32, 160, 160]; (pool2) → hx [1, 32, 80, 80]
- (rebnconv3) Conv2d(32, 32) → hx3 [1, 32, 80, 80]; (pool3) → hx [1, 32, 40, 40]
- (rebnconv4) Conv2d(32, 32) → hx4 [1, 32, 40, 40]; (pool4) → hx [1, 32, 20, 20]
- (rebnconv5) Conv2d(32, 32) → hx5 [1, 32, 20, 20]; (pool5) → hx [1, 32, 10, 10]
- (rebnconv6) Conv2d(32, 32) → hx6 [1, 32, 10, 10]
- (rebnconv7) Conv2d(32, 32, pad 2, dilation 2) → hx7 [1, 32, 10, 10]
- (rebnconv6d) Conv2d(64, 32) on cat(hx7, hx6) [1, 64, 10, 10] → hx6d [1, 32, 10, 10]; upsample to hx5's size → hx6dup [1, 32, 20, 20]
- (rebnconv5d) Conv2d(64, 32) on cat(hx6dup, hx5) → hx5d; upsample → hx5dup [1, 32, 40, 40]
- (rebnconv4d) Conv2d(64, 32) on cat(hx5dup, hx4) → hx4d; upsample → hx4dup [1, 32, 80, 80]
- (rebnconv3d) Conv2d(64, 32) on cat(hx4dup, hx3) → hx3d; upsample → hx3dup [1, 32, 160, 160]
- (rebnconv2d) Conv2d(64, 32) on cat(hx3dup, hx2) → hx2d; upsample → hx2dup [1, 32, 320, 320]
- (rebnconv1d) Conv2d(64, 64) on cat(hx2dup, hx1) → hx1d [1, 64, 320, 320]
- hx1d + hxin (residual connection; every RSU ends this way) → stage output hx1 [1, 64, 320, 320]

(pool12) on hx1 → hx [1, 64, 160, 160]

(stage2) RSU6 — same pattern, one level shallower (pool1–pool4, rebnconv1–6, rebnconv5d–1d): (rebnconvin) Conv2d(64, 128); internal convs at 32 channels, with (rebnconv6) dilated (pad 2, dilation 2); decoder convs Conv2d(64, 32); (rebnconv1d) Conv2d(64, 128) → hx2 [1, 128, 160, 160]; (pool23) → hx [1, 128, 80, 80]

(stage3) RSU5: (rebnconvin) Conv2d(128, 256); internal convs at 64 channels (rebnconv1–4 plus dilated rebnconv5); decoder convs Conv2d(128, 64); (rebnconv1d) Conv2d(128, 256) → hx3 [1, 256, 80, 80]; (pool34) → hx [1, 256, 40, 40]

(stage4) RSU4: (rebnconvin) Conv2d(256, 512); internal convs at 128 channels (rebnconv1–3 plus dilated rebnconv4); decoder convs Conv2d(256, 128); (rebnconv1d) Conv2d(256, 512) → hx4 [1, 512, 40, 40]; (pool45) → hx [1, 512, 20, 20]

(stage5) RSU4F — no pooling inside, dilation instead: (rebnconvin) Conv2d(512, 512); (rebnconv1–4) convs to 256 channels with dilation 1, 2, 4, 8; (rebnconv3d) Conv2d(512, 256, dilation 4); (rebnconv2d) Conv2d(512, 256, dilation 2); (rebnconv1d) Conv2d(512, 512) → hx5 [1, 512, 20, 20]; (pool56) → hx [1, 512, 10, 10]

(stage6) RSU4F, same structure as stage5 → hx6 [1, 512, 10, 10]; upsample → hx6up [1, 512, 20, 20]

Decoder

(stage5d) RSU4F on cat(hx6up, hx5) [1, 1024, 20, 20]: (rebnconvin) Conv2d(1024, 512), rest as in stage5 → hx5d [1, 512, 20, 20]; upsample → hx5dup [1, 512, 40, 40]
(stage4d) RSU4 on cat(hx5dup, hx4): (rebnconvin) Conv2d(1024, 256), internal 128 channels, (rebnconv1d) Conv2d(256, 256) → hx4d [1, 256, 40, 40]; upsample → hx4dup [1, 256, 80, 80]
(stage3d) RSU5 on cat(hx4dup, hx3): (rebnconvin) Conv2d(512, 128), internal 64 channels, (rebnconv1d) Conv2d(128, 128) → hx3d [1, 128, 80, 80]; upsample → hx3dup [1, 128, 160, 160]
(stage2d) RSU6 on cat(hx3dup, hx2): (rebnconvin) Conv2d(256, 64), internal 32 channels, (rebnconv1d) Conv2d(64, 64) → hx2d [1, 64, 160, 160]; upsample → hx2dup [1, 64, 320, 320]
(stage1d) RSU7 on cat(hx2dup, hx1): (rebnconvin) Conv2d(128, 64), internal 16 channels, (rebnconv1d) Conv2d(32, 64) → hx1d [1, 64, 320, 320]

Side outputs (each a 3×3 Conv2d, pad 1, down to 1 channel):
- (side1) Conv2d(64, 1) on hx1d → d1 [1, 1, 320, 320]
- (side2) Conv2d(64, 1) on hx2d → d2 [1, 1, 160, 160], upsampled to [1, 1, 320, 320]
- (side3) Conv2d(128, 1) on hx3d → d3 [1, 1, 80, 80], upsampled to [1, 1, 320, 320]
- (side4) Conv2d(256, 1) on hx4d → d4 [1, 1, 40, 40], upsampled to [1, 1, 320, 320]
- (side5) Conv2d(512, 1) on hx5d → d5 [1, 1, 20, 20], upsampled to [1, 1, 320, 320]
- (side6) Conv2d(512, 1) on hx6 → d6 [1, 1, 10, 10], upsampled to [1, 1, 320, 320]

d1 through d6 are concatenated into [1, 6, 320, 320], and (outconv) Conv2d(6, 1, kernel_size=1) fuses them into d0 [1, 1, 320, 320].
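The Conv2d → BatchNorm2d → ReLU pattern that repeats throughout the trace, and the "upsample to the size of X" steps, come from two small helpers in the official u2net.py; paraphrased:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class REBNCONV(nn.Module):
    """3x3 conv -> BatchNorm -> ReLU; `dirate` sets both dilation and padding,
    so the spatial size is always preserved (this is why only the pools and
    upsamples change H and W in the trace)."""
    def __init__(self, in_ch=3, out_ch=3, dirate=1):
        super().__init__()
        self.conv_s1 = nn.Conv2d(in_ch, out_ch, 3, padding=dirate, dilation=dirate)
        self.bn_s1 = nn.BatchNorm2d(out_ch)
        self.relu_s1 = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu_s1(self.bn_s1(self.conv_s1(x)))

def _upsample_like(src, tar):
    """Bilinearly upsample `src` to the spatial size of `tar`
    (the hx6dup / hx5dup / ... steps in the trace)."""
    return F.interpolate(src, size=tar.shape[2:], mode="bilinear", align_corners=False)

# e.g. the hx6d -> hx6dup step inside RSU7
hx6d = torch.randn(1, 32, 10, 10)
hx5 = torch.randn(1, 32, 20, 20)
hx6dup = _upsample_like(hx6d, hx5)
print(hx6dup.shape)  # torch.Size([1, 32, 20, 20])
```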
The network then returns the sigmoid of d0 through d6, seven maps in total. d0 is normally taken as the prediction; the six side outputs are also supervised, i.e. each of them contributes to the training loss as well.
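In the official repo this deep supervision is implemented by summing binary cross-entropy over all seven outputs against the same ground-truth mask (its `muti_bce_loss_fusion`); a minimal sketch, with toy constant maps standing in for real predictions:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss(reduction="mean")

def muti_bce_loss(d0, d1, d2, d3, d4, d5, d6, target):
    """Deep supervision: the fused output d0 and the six side maps are each
    compared against the same mask; the seven losses are summed for training."""
    losses = [bce(d, target) for d in (d0, d1, d2, d3, d4, d5, d6)]
    return losses[0], sum(losses)  # (loss on d0 alone, total training loss)

# toy check: all seven outputs predict 0.8 everywhere, mask is all ones
target = torch.full((1, 1, 320, 320), 1.0)
preds = [torch.full((1, 1, 320, 320), 0.8, requires_grad=True) for _ in range(7)]
loss0, total = muti_bce_loss(*preds, target)
print(loss0.item())  # -log(0.8) ≈ 0.2231
```

Since the outputs are already passed through sigmoid, plain BCELoss applies; with raw logits one would use BCEWithLogitsLoss instead.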


Published: 2023-12-03 10:21:57.
Link: https://www.elefans.com/category/jswz/34/1654266.html
