
Keywords
active object recognition, recurrent neural network, next-best-view, 3D attention
Abstract
Active vision is inherently attention-driven: an agent actively selects views to attend to in order to rapidly perform a vision task while improving its internal representation of the scene being observed. Inspired by the recent success of attention-based models in 2D vision tasks on single RGB images, we address multi-view, depth-based active object recognition using an attention mechanism, via an end-to-end recurrent 3D attentional network. The architecture takes advantage of a recurrent neural network to store and update an internal representation. Our model, trained with 3D shape datasets, is able to iteratively attend to the best views of a target object in order to recognize it. To realize 3D view selection, we derive a 3D spatial transformer network. It is differentiable, allowing training with backpropagation, and thus achieving much faster convergence than the reinforcement learning employed by most existing attention-based models. Experiments show that our method, with only depth input, achieves state-of-the-art next-best-view performance both in terms of time taken and recognition accuracy.
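For illustration, below is a minimal sketch of such a recurrent 3D attentional loop in PyTorch. It assumes a volumetric internal representation, a GRU cell for the recurrent state, and affine_grid/grid_sample for the differentiable 3D sampling step; the module names, layer sizes, and glimpse count are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch (illustrative, not the authors' exact network): a recurrent
# 3D attentional loop with a differentiable 3D sampling step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Recurrent3DAttention(nn.Module):
    def __init__(self, n_classes, feat_dim=256, hidden_dim=256):
        super().__init__()
        # Encode one depth view (1 x 64 x 64) into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, feat_dim), nn.ReLU())
        # Recurrent unit stores and updates the internal representation.
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        # Regress a 3x4 affine transform parameterizing the next 3D attention window.
        self.loc = nn.Linear(hidden_dim, 12)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def attend(self, volume, theta):
        # Differentiable 3D sampling: trilinear resampling of the working
        # volume at the attended region; gradients flow back into theta.
        grid = F.affine_grid(theta.view(-1, 3, 4), volume.size(),
                             align_corners=False)
        return F.grid_sample(volume, grid, align_corners=False)

    def forward(self, depth_views, volume, n_glimpses=3):
        # depth_views: (B, T, 1, 64, 64) candidate depth views
        # volume:      (B, C, D, H, W)   working volumetric representation
        h = None
        for t in range(n_glimpses):
            # In a deployed system the regressed parameters would drive
            # acquisition of the next view; here we step through
            # pre-rendered candidates for brevity.
            feat = self.encoder(depth_views[:, t])
            h = self.rnn(feat, h)
            theta = self.loc(h)                  # next-view parameters
            volume = self.attend(volume, theta)  # 3D attention step
        return self.classifier(h)

Because the sampling step is differentiable, the classification loss backpropagates into the view-parameter regressor, which is the property that allows end-to-end training without reinforcement learning.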
Publisher
Tsinghua University Press
Recommended Citation
Min Liu, Yifei Shi, Lintao Zheng, et al. Recurrent 3D attentional networks for end-to-end active object recognition. Computational Visual Media, 2019, 5(1): 91-104.
Included in
Computational Engineering Commons, Computer-Aided Engineering and Design Commons, Graphics and Human Computer Interfaces Commons, Software Engineering Commons