
Keywords
object detection, lane boundary detection, autonomous driving, deep learning
Abstract
In this paper, we propose a simple but effective framework for lane boundary detection, called SpinNet. Since cars and pedestrians often occlude lane boundaries and the local features of lane boundaries are not distinctive, analyzing and collecting global context information is crucial for lane boundary detection. To this end, we design a novel spinning convolution layer and a brand-new lane parameterization branch in our network to detect lane boundaries from a global perspective. To extract features in narrow strip-shaped fields, we adopt strip-shaped convolutions with 1×n or n×1 kernels in the spinning convolution layer. Because straight strip-shaped convolutions can only extract features in the vertical or horizontal direction, we introduce the concept of feature map rotation, which allows the convolutions to be applied in multiple directions so that more information about a whole lane boundary can be collected. Moreover, unlike most existing lane boundary detectors, which extract lane boundaries from segmentation masks, our lane boundary parameterization branch predicts a curve expression for the lane boundary at each pixel of the output feature map, and the network additionally predicts weights for these curves to better form the final lane boundaries. Our framework is easy to implement and end-to-end trainable. Experiments show that our proposed SpinNet outperforms state-of-the-art methods.
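To make the spinning convolution idea in the abstract concrete, the sketch below rotates the feature map to several angles, applies a shared strip-shaped (1×k) convolution at each angle, rotates the responses back, and fuses them. This is a minimal illustrative PyTorch sketch only: the module name SpinningConv, the kernel length k, the set of rotation angles, and the 1×1 fusion step are assumptions and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpinningConv(nn.Module):
    """Illustrative sketch of a spinning convolution layer (not the paper's code).

    Rotates the feature map, applies a strip-shaped (1 x k) convolution,
    rotates the result back, and fuses the multi-direction responses.
    """

    def __init__(self, channels, k=9, angles=(0.0, 45.0, 90.0, 135.0)):
        super().__init__()
        self.angles = angles
        # One horizontal strip convolution shared across angles; rotating the
        # feature map provides the other orientations.
        self.strip_conv = nn.Conv2d(channels, channels,
                                    kernel_size=(1, k), padding=(0, k // 2))
        self.fuse = nn.Conv2d(channels * len(angles), channels, kernel_size=1)

    def _rotate(self, x, angle_deg):
        # Build an affine grid that rotates the feature map by angle_deg degrees.
        theta = torch.tensor(angle_deg * 3.141592653589793 / 180.0, device=x.device)
        zero = torch.zeros((), device=x.device)
        rot = torch.stack([
            torch.stack([torch.cos(theta), -torch.sin(theta), zero]),
            torch.stack([torch.sin(theta),  torch.cos(theta), zero]),
        ]).unsqueeze(0).expand(x.size(0), -1, -1)          # (N, 2, 3)
        grid = F.affine_grid(rot, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    def forward(self, x):
        outs = []
        for a in self.angles:
            rotated = self._rotate(x, a)           # rotate feature map
            conv = self.strip_conv(rotated)        # strip-shaped convolution
            outs.append(self._rotate(conv, -a))    # rotate response back
        return self.fuse(torch.cat(outs, dim=1))   # fuse multi-direction features
```

For example, `SpinningConv(channels=64)(torch.randn(2, 64, 128, 128))` returns a tensor of the same shape, with each output location aggregating strip-shaped context from several directions.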
Publisher
Tsinghua University Press
Recommended Citation
Ruochen Fan, Xuanrun Wang, Qibin Hou, et al. SpinNet: Spinning convolutional network for lane boundary detection. Computational Visual Media, 2019, 5(4): 417-428.
Included in
Computational Engineering Commons, Computer-Aided Engineering and Design Commons, Graphics and Human Computer Interfaces Commons, Software Engineering Commons