Citation
Sammani, Fawaz and Elsayed, Mahmoud (2020) Look and modify: Modification networks for image captioning. Computer Vision and Pattern Recognition. pp. 1-12.
Abstract
Attention-based neural encoder-decoder frameworks have been widely used for image captioning. Many of these frameworks devote their full capacity to generating the caption from scratch, relying solely on global image features or object-detection region features. In this paper, we introduce a novel framework that learns to modify the captions produced by an existing captioning model by modeling the residual information: at each timestep, the model learns what to keep, remove, or add to the existing caption, allowing it to focus entirely on "what to modify" rather than on "what to predict". We evaluate our method on the COCO dataset, training it on top of several image captioning frameworks, and show that it successfully modifies captions, yielding better captions with higher evaluation scores.
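The abstract describes the architecture only at a high level. As a rough illustration of the "modify rather than predict from scratch" idea, a minimal PyTorch-style sketch might look like the following. All names here (`ModificationDecoder`, `residual_gate`, the layer sizes) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a caption-modification decoder: at each timestep it blends
# information copied from an existing caption with newly generated information.
# All module and variable names are illustrative, not the authors' code.
import torch
import torch.nn as nn


class ModificationDecoder(nn.Module):
    """Learns what to keep, remove, or add to an existing caption instead of
    predicting every word from scratch."""

    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512, feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encodes the caption produced by a base (frozen) captioning model.
        self.caption_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Decoder LSTM conditioned on pooled image features and the previous word.
        self.decoder = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        # Attention over the encoded existing caption (retrieves reusable content).
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=1, batch_first=True)
        # Gate deciding how much copied (residual) information to mix in.
        self.residual_gate = nn.Linear(2 * hidden_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat, existing_caption, target_caption):
        # image_feat: (B, feat_dim) pooled image features
        # existing_caption / target_caption: (B, T) token ids
        B, T = target_caption.shape
        enc, _ = self.caption_encoder(self.embed(existing_caption))   # (B, T_old, H)
        h = image_feat.new_zeros(B, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(T):
            word = self.embed(target_caption[:, t])                   # teacher forcing
            h, c = self.decoder(torch.cat([word, image_feat], dim=-1), (h, c))
            # Attend over the existing caption to decide what to keep.
            copied, _ = self.attn(h.unsqueeze(1), enc, enc)
            copied = copied.squeeze(1)
            # Gate: blend copied content with the newly generated state.
            g = torch.sigmoid(self.residual_gate(torch.cat([h, copied], dim=-1)))
            fused = g * copied + (1.0 - g) * h
            logits.append(self.output(fused))
        return torch.stack(logits, dim=1)                             # (B, T, vocab)
```

At inference time, under these assumptions, the existing caption would come from a frozen base captioner and the decoder would generate the modified caption word by word (greedily or with beam search) instead of using teacher forcing.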
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Image authentication |
| Subjects: | T Technology > TA Engineering (General). Civil engineering (General) > TA1501-1820 Applied optics. Photonics |
| Divisions: | Faculty of Engineering and Technology (FET) |
| Depositing User: | Ms Rosnani Abd Wahab |
| Date Deposited: | 16 Dec 2020 07:54 |
| Last Modified: | 16 Dec 2020 07:54 |
| URI: | http://shdl.mmu.edu.my/id/eprint/7874 |