
Learning a perspective-embedded deconvolution network for crowd counting
No public code release was found.

The paper's contribution to crowd density estimation: fuse the perspective into a deconvolution network.

First, let's look at perspective.
Perspective is an inherent property of most surveillance scenes

Perspective here means that an object of fixed physical size appears at different sizes in the image depending on its position: the farther it is from the camera, the smaller it appears, and the nearer, the larger. In a crowd image, people far from the camera therefore look small, while people close to the camera look large.
Perspective distortions need to be compensated in regression-based crowd counting methods

The ground truth density map is still generated as a sum of Gaussian kernels placed at the annotated head positions. The perspective maps are used to correct the perspective distortion: the parameters of the Gaussian kernels are set according to the perspective map.
the ground truth density map is defined as a summation of all the Gaussian kernels centering at each center of the objects.
Due to the varying sizes of pedestrians caused by perspective distortion, it is necessary to incorporate specific scene geometric information to cover the size variations
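
Below is a minimal sketch of this ground-truth generation, assuming the perspective map stores a per-pixel scale value (e.g. pixels per meter) and that the kernel width sigma is a linear function of it with factor `k`; the exact mapping used in the paper is not given in this note:

```python
import numpy as np

def perspective_density_map(shape, head_points, perspective, k=0.2):
    """Ground truth density map: a sum of 2-D Gaussians, one per annotated
    head, whose width follows the perspective value at that location."""
    H, W = shape
    density = np.zeros((H, W), dtype=np.float32)
    ys, xs = np.mgrid[0:H, 0:W]
    for r, c in head_points:  # integer (row, col) head annotations
        # Larger perspective value (nearer, larger person) -> wider kernel.
        sigma = max(k * perspective[r, c], 1.0)
        g = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
        density += g / (2 * np.pi * sigma ** 2)  # each head integrates to ~1
    return density
```

With this normalization, summing the density map over all pixels recovers (approximately) the number of people in the image, which is what the counting network regresses.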

Next, let's look at the deconvolution network.

The network takes the RGB images and the perspective maps as input.
An L2 loss between the estimated and ground truth density maps is used to train the network:
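
A sketch of this loss in PyTorch, assuming the common form L(Θ) = 1/(2N) · Σᵢ ||F(Xᵢ, Pᵢ; Θ) − Dᵢ||² over a batch of N images (the exact normalization is an assumption):

```python
import torch

def l2_density_loss(pred, gt):
    """Pixel-wise L2 loss between estimated and ground truth density maps.
    pred, gt: tensors of shape (N, 1, H, W).
    The 1/(2N) normalization is an assumption of this sketch."""
    n = pred.shape[0]  # batch size N
    return ((pred - gt) ** 2).sum() / (2 * n)
```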

4.2. Baseline model: the counting FCN
The baseline model (CFCN) is built on the FCN semantic segmentation framework: the CFCN network comprises layers conv1 to conv4, with filter sizes of 32 7×7×3, 32 7×7×32, and 64 5×5×32 for the first three layers.
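
A sketch of this baseline from the filter sizes quoted above. Pooling between blocks and conv4's filter size are not specified in this note, so both are assumptions here (conv4 is written as a 1×1 density readout):

```python
import torch.nn as nn

cfcn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=7, padding=3),   # conv1: 32 filters of 7x7x3
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),                              # downsampling (assumed)
    nn.Conv2d(32, 32, kernel_size=7, padding=3),  # conv2: 32 filters of 7x7x32
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),                              # downsampling (assumed)
    nn.Conv2d(32, 64, kernel_size=5, padding=2),  # conv3: 64 filters of 5x5x32
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 1, kernel_size=1),              # conv4: size unspecified, assumed 1x1
)
```

Because of the pooling, this baseline's density output is coarser than the input image, which is what motivates the deconvolution network below.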

4.3. Deconvolution network
CFCN-DCN adds two layers on top of CFCN: conv5 with filter size 5×5 and conv6 with filter size 7×7, which are learnable kernels for a precisely dense output, yielding a full-resolution output map.
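
A sketch of the added deconvolution block. The note gives only the kernel sizes (conv5: 5×5, conv6: 7×7); I interpret them here as learnable transposed convolutions that undo the two assumed 2× downsamplings, with channel counts, strides, and paddings being assumptions:

```python
import torch.nn as nn

dcn = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=5, stride=2, padding=2,
                       output_padding=1),  # conv5: 5x5, doubles the resolution
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(32, 1, kernel_size=7, stride=2, padding=3,
                       output_padding=1),  # conv6: 7x7, restores full resolution
)
```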

4.4. Perspective fusion
the perspective-embedded deconvolution network (PE-CFCN-DCN)
Figure 2 in the paper shows this most clearly.
A perspective map pyramid is constructed at different resolutions matching the network. Each fusion layer is then implemented by directly concatenating the feature maps from the RGB stream with the correspondingly-sized perspective map, and is inserted before each deconvolution block for guided interpolation.
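
A minimal sketch of one such fusion layer, assuming the perspective map enters as a single extra channel (tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def fuse_perspective(feat, perspective):
    """One fusion layer: resize the perspective map to the feature map's
    resolution (one level of the perspective pyramid) and concatenate it
    as an extra channel, to be consumed by the next deconvolution block."""
    p = F.interpolate(perspective, size=feat.shape[-2:],
                      mode="bilinear", align_corners=False)
    return torch.cat([feat, p], dim=1)

# Usage (shapes assumed): feat (N, 64, H/4, W/4), perspective (N, 1, H, W)
# -> fuse_perspective(feat, perspective) has shape (N, 65, H/4, W/4).
```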


How is the labeled perspective map obtained? (The note does not say; as far as I know, crowd datasets such as WorldExpo'10 commonly generate perspective maps by annotating the heights of a few pedestrians at different image positions and fitting a linear per-row scale.)
