Show, Edit and Tell: A Framework for Editing Image Captions

Citation

Sammani, Fawaz and Melas-Kyriazi, Luke (2020) Show, Edit and Tell: A Framework for Editing Image Captions. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 4807-4815. ISSN 1063-6919


Abstract

Most image captioning frameworks generate captions directly from images, learning a mapping from visual features to natural language. However, editing existing captions can be easier than generating new ones from scratch. Intuitively, when editing captions, a model is not required to learn information that is already present in the caption (i.e. sentence structure), enabling it to focus on fixing details (e.g. replacing repetitive words). This paper proposes a novel approach to image captioning based on iterative adaptive refinement of an existing caption. Specifically, our caption-editing model consists of two sub-modules: (1) EditNet, a language module with an adaptive copy mechanism (Copy-LSTM) and a Selective Copy Memory Attention mechanism (SCMA), and (2) DCNet, an LSTM-based denoising auto-encoder. These components enable our model to directly copy from and modify existing captions. Experiments demonstrate that our new approach achieves state-of-the-art performance on the MS COCO dataset both with and without sequence-level training.
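The core idea of an adaptive copy mechanism can be illustrated with a minimal sketch: at each decoding step, a learned gate blends a distribution over words copied from the existing caption with the decoder's generated vocabulary distribution. The function and variable names below are illustrative assumptions, not the paper's actual API, and the gate value would in practice be predicted by the Copy-LSTM rather than fixed.

```python
# Minimal sketch of an adaptive copy mechanism for caption editing.
# Hypothetical interface: the final word distribution is a gated mixture
#   p(w) = (1 - g) * p_generate(w) + g * p_copy(w),
# where g in [0, 1] is a copy gate (learned in the real model).

def edit_step(gen_probs, copy_probs, copy_gate):
    """Blend the generation and copy distributions for one decoding step."""
    vocab = set(gen_probs) | set(copy_probs)
    return {w: (1.0 - copy_gate) * gen_probs.get(w, 0.0)
               + copy_gate * copy_probs.get(w, 0.0)
            for w in vocab}

# Toy example: the existing caption's word is "dog", but the visual
# decoder slightly prefers "cat". A gate leaning toward copying keeps "dog".
gen_probs = {"cat": 0.7, "dog": 0.3}    # decoder's generated distribution
copy_probs = {"dog": 1.0}               # distribution over the old caption
mixed = edit_step(gen_probs, copy_probs, copy_gate=0.6)
best_word = max(mixed, key=mixed.get)
```

With a low gate value, the generated distribution dominates instead, so the model can overwrite words it judges incorrect while copying the rest of the caption unchanged.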

Item Type: Article
Uncontrolled Keywords: Image compression
Subjects: T Technology > TA Engineering (General). Civil engineering (General) > TA1501-1820 Applied optics. Photonics
Divisions: Faculty of Engineering and Technology (FET)
Depositing User: Ms Suzilawati Abu Samah
Date Deposited: 26 Oct 2021 03:02
Last Modified: 26 Oct 2021 03:02
URI: http://shdl.mmu.edu.my/id/eprint/8357
