scene text removal, text stroke detection, generative adversarial networks, cascaded network design, real-world dataset
Recent learning-based approaches show promising performance improvements on the scene text removal task, but they usually leave remnants of text and produce visually unpleasant results. In this work, a novel end-to-end framework is proposed based on accurate text stroke detection. Specifically, the text removal problem is decoupled into text stroke detection and stroke removal; separate networks are designed to solve these two subproblems, the latter being a generative network. These two networks are combined as a processing unit, which is cascaded to obtain the final text removal model. Experimental results demonstrate that the proposed method substantially outperforms the state of the art in both locating and erasing scene text. A new large-scale real-world dataset with 12,120 images has been constructed and is being made available to facilitate research, as current publicly available datasets are mainly synthetic and thus cannot properly measure the performance of different methods.
Bian, Xuewei; Wang, Chaoqun; Quan, Weize; Ye, Juntao; Zhang, Xiaopeng; and Yan, Dong-Ming
"Scene text removal via cascaded text stroke detection and erasing,"
Computational Visual Media: Vol. 8: Iss. 2, Article 7.
Available at: https://dc.tsinghuajournals.com/computational-visual-media/vol8/iss2/7