Multi-Modal Representations for Improved Bilingual Lexicon Learning

I. Vulić, D. Kiela, S. Clark & M. F. Moens
Recent work has revealed the potential of using visual representations for bilingual lexicon learning (BLL). Such image-based BLL methods, however, still fall short of linguistic approaches. In this paper, we propose a simple yet effective multi-modal approach that learns bilingual semantic representations by fusing linguistic and visual input. These new bilingual multi-modal embeddings display significant performance gains in the BLL task for three language pairs on two benchmark test sets, outperforming linguistic-only BLL models using...
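The abstract does not spell out how the linguistic and visual signals are combined. A common fusion scheme in multi-modal semantics, shown here as a minimal sketch (the weighting parameter `alpha`, the embedding dimensions, and the concatenation strategy are illustrative assumptions, not necessarily the paper's method), is weighted concatenation of L2-normalized modality vectors:

```python
import numpy as np

def fuse(linguistic, visual, alpha=0.5):
    # Weighted concatenation of L2-normalized modality vectors -- one
    # common multi-modal fusion scheme; the paper's exact fusion
    # strategy is not specified in this abstract.
    l = linguistic / np.linalg.norm(linguistic)
    v = visual / np.linalg.norm(visual)
    return np.concatenate([alpha * l, (1.0 - alpha) * v])

def cosine(a, b):
    # Cosine similarity, typically used to rank translation candidates
    # in the shared bilingual space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for a 300-d word embedding and a 128-d
# image feature vector (dimensions are illustrative assumptions).
rng = np.random.default_rng(42)
src = fuse(rng.standard_normal(300), rng.standard_normal(128))
tgt = fuse(rng.standard_normal(300), rng.standard_normal(128))
print(src.shape)            # fused vector has 300 + 128 dimensions
print(cosine(src, tgt))     # similarity score for a candidate pair
```

In a BLL setting, each target-language candidate would be scored against a source word by cosine similarity of the fused vectors, and the highest-scoring candidate proposed as the translation.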