Self-supervised learning from web data for multimodal retrieval


Abstract:

Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features without the need for human-annotated data. Web and social media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learned in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision, and we analyze the semantic structure of the learned joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state-of-the-art text embeddings on three different benchmarks. We show that the embeddings learned from web and social media data are competitive with supervised methods on the text-based image retrieval task, and we clearly outperform the state of the art on the MIRFlickr dataset when training on the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learned embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, which can be used for fair comparison of image-text embeddings.
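
The retrieval step described in the abstract reduces to ranking images by similarity to a text query in the shared embedding space. The following minimal sketch illustrates that idea under assumptions: it presumes image embeddings have already been produced by a visual model trained to regress the text embedding of each image's associated caption, and all names, shapes, and the toy data are illustrative rather than the authors' code.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize vectors so cosine similarity becomes a dot product."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve(query_text_embedding, image_embeddings, top_k=5):
    """Rank images by cosine similarity to a query text embedding.

    query_text_embedding: (d,) vector from a text embedding model
                          (e.g., Word2Vec, GloVe, FastText, LDA).
    image_embeddings:     (n, d) matrix of images projected into the same
                          joint space by the visual model.
    Returns indices of the top_k most similar images.
    """
    q = l2_normalize(query_text_embedding)
    imgs = l2_normalize(image_embeddings)
    scores = imgs @ q                      # cosine similarities, shape (n,)
    return np.argsort(-scores)[:top_k]

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
image_embeddings = rng.normal(size=(1000, 300))   # e.g., a 300-d text space
query = rng.normal(size=300)                      # embedding of a text query
print(retrieve(query, image_embeddings, top_k=5))
```

The same ranking routine supports the semantic (non-instance-level) retrieval mentioned in the abstract, since any phrase that can be embedded in the text space can serve as a query.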

Year of publication:

2019

Keywords:

  • Multimodal retrieval
  • Webly supervised learning
  • Multimodal embedding
  • Text embeddings
  • Self-supervised learning

Source:

Scopus

Document type:

Book Part

Status:

Restricted access

Knowledge areas:

  • Machine learning
  • Computer science

Subject areas:

  • Computer programming, programs, data, security