[Semantic Segmentation] CGNet
Paper: CGNet
GitHub: Code
Table of Contents
- Context Guided Block
- Context Guided Network
- Experiment
  - 1. Ablation Studies
  - 2. Comparison with state-of-the-arts
Without any post-processing or multi-scale testing, the proposed CGNet achieves 64.8% mean IoU on Cityscapes with less than 0.5 M parameters.
Context Guided Block
Firstly, the CG block learns the joint feature of both the local feature and the surrounding context. Thus, the CG block learns the representation of each object from both the object itself and its spatially related objects, which contains rich co-occurrence relationships.
Secondly, the CG block employs the global context to improve the joint feature. The global context is applied to channel-wisely re-weight the joint feature, so as to emphasize useful components and suppress useless ones.
The CG block adopts channel-wise convolutions (depth-wise conv). Many previous works reduce computation by replacing standard convolutions with depthwise separable convolutions (a depthwise conv followed by a pointwise conv). Unlike them, the CG block uses only the depthwise conv and removes the pointwise conv: the separable design is not suitable for the proposed CG block, since the local feature and the surrounding context in the CG block need to maintain channel independence.
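The parameter savings from dropping the pointwise conv can be sketched in PyTorch; the channel count and kernel size below are illustrative, not the paper's exact configuration:

```python
import torch.nn as nn

C, k = 64, 3  # hypothetical channel count and kernel size

# Standard conv: C * C * k * k weights
standard = nn.Conv2d(C, C, k, padding=1, bias=False)
# Depthwise separable = depthwise (C * 1 * k * k) + pointwise (C * C * 1 * 1)
dw = nn.Conv2d(C, C, k, padding=1, groups=C, bias=False)
pw = nn.Conv2d(C, C, 1, bias=False)
# The CG block keeps only the depthwise part, so channels stay independent

def n_params(*mods):
    return sum(p.numel() for m in mods for p in m.parameters())

print(n_params(standard))  # 36864
print(n_params(dw))        # 576  (depthwise only, as in the CG block)
print(n_params(dw, pw))    # 4672 (depthwise separable)
```

Using `groups=C` makes each filter see only its own input channel, which is exactly the channel independence the block relies on.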
Intuitively, global residual learning (GRL) has a stronger capability than local residual learning (LRL) to promote the flow of information in the network.
CG block
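The pieces above can be put together as a minimal sketch of the CG block. This is a simplified reading of the paper, not the official implementation; the class and attribute names are ours, and details such as the channel-halving 1x1 conv and the reduction ratio in the global-context branch are assumptions:

```python
import torch
import torch.nn as nn

class CGBlockSketch(nn.Module):
    """Simplified Context Guided block:
    f_loc: 3x3 depthwise conv        -> local feature
    f_sur: dilated 3x3 depthwise conv -> surrounding context
    joint: concat + BN + PReLU        -> joint feature
    f_glo: global pool + FC weights   -> channel-wise re-weighting
    """
    def __init__(self, channels, dilation=2, reduction=8):
        super().__init__()
        half = channels // 2
        self.reduce = nn.Conv2d(channels, half, 1, bias=False)  # halve channels
        # Depthwise only (groups=half): no pointwise conv, channels stay independent
        self.f_loc = nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False)
        self.f_sur = nn.Conv2d(half, half, 3, padding=dilation,
                               dilation=dilation, groups=half, bias=False)
        self.bn_act = nn.Sequential(nn.BatchNorm2d(channels), nn.PReLU(channels))
        self.f_glo = nn.Sequential(           # global context branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        r = self.reduce(x)
        # Joint feature of local feature and surrounding context
        joi = self.bn_act(torch.cat([self.f_loc(r), self.f_sur(r)], dim=1))
        # Global context channel-wisely re-weights the joint feature
        out = joi * self.f_glo(joi)
        return x + out  # residual connection
```

The `Sigmoid` output acts as per-channel gates, emphasizing useful components of the joint feature and suppressing useless ones.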
Context Guided Network
CGNet follows the major principle of "deep and thin" to save memory footprint as much as possible.
The CG block is utilized in all stages of CGNet. Thus, CGNet captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy.
Current mainstream segmentation networks have five down-sampling stages, which learn overly abstract features of objects and lose much of the discriminative spatial information, causing over-smoothed segmentation boundaries. In contrast, CGNet has only three down-sampling stages, which helps preserve spatial information.
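The three-downsampling-stage layout can be sketched as a skeleton in PyTorch. The channel widths, block counts, and use of plain convs in place of stacked CG blocks are all illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

def make_cgnet_skeleton(num_classes=19):
    """Illustrative 'deep and thin' skeleton: only three stride-2 stages,
    so the deepest feature map is 1/8 of the input resolution."""
    return nn.Sequential(
        # Stage 1: downsample to 1/2 resolution
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.PReLU(32),
        # Stage 2: downsample to 1/4 resolution (CG blocks would follow here)
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.PReLU(64),
        # Stage 3: downsample to 1/8 resolution (more CG blocks would follow)
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.PReLU(128),
        # 1x1 classifier, then upsample straight back to input size
        nn.Conv2d(128, num_classes, 1),
        nn.Upsample(scale_factor=8, mode='bilinear', align_corners=False),
    )

out = make_cgnet_skeleton()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 19, 64, 64])
```

Stopping at 1/8 resolution (rather than the 1/32 of five-stage backbones) is what preserves the spatial detail the text describes, at the cost of a smaller receptive field, which the dilated surrounding-context branch compensates for.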
Experiment
1. Ablation Studies
2. Comparison with state-of-the-arts