AnchorNet: A Weakly Supervised Network to Learn Geometry-sensitive Features For Semantic Matching

Authors
David Novotny, Diane Larlus, Andrea Vedaldi
Citation
Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, USA, 21 - 26 July 2017
Abstract

Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HOG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations well-suited for semantic matching. It relies on a set of filters whose responses are geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves the results of state-of-the-art semantic matching methods such as deformable spatial pyramid matching or proposal flow. We show positive results on the cross-instance matching task, where different instances of the same object category are matched, as well as on a new cross-category semantic matching task, which aligns pairs of instances each drawn from a different object class.

Publication Year
2017
File Download
AnchorNet-A weakly supervised network to learn geometry-sensitive features for semantic matching.pdf (4.88MB)