Abstract
A referring expression is a kind of language expression used to refer to a particular object. To make such an expression unambiguous, people often use attributes to describe the object. In this paper, we explore the role of attributes by incorporating them into both referring expression generation and comprehension. We first train an attribute learning model from visual objects and their paired descriptions. In the generation task, we feed the learned attributes into the generation model, so that expressions are generated driven by both the attributes and the previously generated words. For comprehension, we embed the learned attributes together with visual features and semantics into a common space, and the target object is retrieved based on its ranking distance in that space. Experimental results on three standard datasets, RefCOCO, RefCOCO+, and RefCOCOg, show significant improvements over the baseline model, demonstrating that our method is effective for both tasks.
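To make the generation side of the abstract concrete, the fragment below is a minimal sketch, not the paper's released code: attribute probabilities predicted from the region feature are concatenated with the visual feature and the previous word embedding at every decoding step, so the expression is driven by both the attributes and the prior words. The PyTorch setup, the LSTM decoder, and all layer names and dimensions are illustrative assumptions.

```python
# Illustrative sketch only (assumed architecture, not the authors' implementation).
import torch
import torch.nn as nn

class AttributeConditionedGenerator(nn.Module):
    def __init__(self, visual_dim=2048, num_attrs=50, vocab_size=10000,
                 embed_dim=512, hidden_dim=512):
        super().__init__()
        # Multi-label attribute predictor learned from objects and paired descriptions.
        self.attr_predictor = nn.Linear(visual_dim, num_attrs)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # Decoder input = visual feature + attribute probabilities + previous word.
        self.lstm = nn.LSTMCell(visual_dim + num_attrs + embed_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, visual_feat, prev_words):
        """visual_feat: (B, visual_dim); prev_words: (B, T) token ids."""
        attrs = torch.sigmoid(self.attr_predictor(visual_feat))   # (B, num_attrs)
        h = visual_feat.new_zeros(visual_feat.size(0), self.lstm.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(prev_words.size(1)):
            w = self.word_embed(prev_words[:, t])                 # previous word
            x = torch.cat([visual_feat, attrs, w], dim=1)         # condition on attributes
            h, c = self.lstm(x, (h, c))
            logits.append(self.output(h))
        return torch.stack(logits, dim=1)                         # (B, T, vocab_size)
```

The comprehension side would analogously embed the predicted attributes alongside visual features and the expression into a common space and rank candidate objects by distance, as described in the abstract.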
Original language | English |
---|---|
Title of host publication | Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 4866-4874 |
Number of pages | 9 |
ISBN (Electronic) | 9781538610329 |
DOIs | |
Publication status | Published - 2017 Dec 22 |
Event | 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy |
Duration | 2017 Oct 22 → 2017 Oct 29 |
Publication series
Name | Proceedings of the IEEE International Conference on Computer Vision |
---|---|
Volume | 2017-October |
ISSN (Print) | 1550-5499 |
Other
Other | 16th IEEE International Conference on Computer Vision, ICCV 2017 |
---|---|
Country/Territory | Italy |
City | Venice |
Period | 17/10/22 → 17/10/29 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition