attention mechanism, scene understanding, relational reasoning, 3D indoor object detection
Relation contexts have proven useful for many challenging vision tasks. In the field of 3D object detection, previous methods have taken advantage of context encoding, graph embedding, or explicit relation reasoning to extract relation contexts. However, redundant relation contexts inevitably arise from noisy or low-quality proposals. In fact, invalid relation contexts usually indicate underlying scene misunderstanding and ambiguity, which may, on the contrary, reduce performance in complex scenes. Inspired by recent attention mechanisms such as the Transformer, we propose a novel 3D attention-based relation module (ARM3D). It encompasses object-aware relation reasoning to extract pairwise relation contexts among qualified proposals and an attention module to distribute attention weights towards different relation contexts. In this way, ARM3D can take full advantage of the useful relation contexts and filter out those that are less relevant or even confusing, mitigating ambiguity in detection. We evaluated the effectiveness of ARM3D by plugging it into several state-of-the-art 3D object detectors, obtaining more accurate and robust detection results. Extensive experiments show the capability and generalization of ARM3D on 3D object detection. Our source code is available at https://github.com/lanlan96/ARM3D.
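The core idea sketched in the abstract — extracting pairwise relation contexts among proposals and weighting them with attention so that uninformative relations are suppressed — can be illustrated with a minimal NumPy toy. This is not the paper's actual ARM3D architecture; the choice of feature difference as the relation context and scaled dot-product as the attention score are illustrative assumptions only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weighted_relations(proposals):
    """Toy sketch: attention-weighted aggregation of pairwise relation contexts.

    proposals: (N, D) array of per-proposal features.
    Returns an (N, D) array of relation-enhanced features.
    """
    N, D = proposals.shape
    # Hypothetical pairwise relation context: feature difference (N, N, D).
    rel = proposals[:, None, :] - proposals[None, :, :]
    # Hypothetical attention scores: scaled dot-product between proposals.
    scores = proposals @ proposals.T / np.sqrt(D)
    np.fill_diagonal(scores, -np.inf)  # a proposal attends to others, not itself
    weights = softmax(scores, axis=1)  # rows sum to 1 over the other proposals
    # Each proposal aggregates relation contexts weighted by attention,
    # so low-scoring (less relevant) relations contribute little.
    return np.einsum('ij,ijd->id', weights, rel)
```

With this weighting, a relation whose attention score is low is effectively filtered out of the aggregated context, which is the behavior the abstract attributes to the attention module.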
Lan, Yuqing; Duan, Yao; Liu, Chenyi; Zhu, Chenyang; Xiong, Yueshan; Huang, Hui; and Xu, Kai
"ARM3D: Attention-based relation module for indoor 3D object detection,"
Computational Visual Media: Vol. 8: Iss. 3, Article 4.
Available at: https://dc.tsinghuajournals.com/computational-visual-media/vol8/iss3/4