Nuwan Attygalle, previously a member of HICUP Lab and currently employed at UCLouvain, recently presented a research paper titled “Text-to-Image Generation for Vocabulary Learning Using the Keyword Method” at the ACM IUI conference. ACM IUI (Intelligent User Interfaces) is a premier annual conference at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI).

The study explores the use of generative AI-powered text-to-image models to create associations between a foreign language word and its meaning. This innovative approach aims to facilitate vocabulary learning by leveraging AI-generated imagery to reinforce memory retention.

This research was conducted in collaboration with researchers from the University of Luxembourg, CSIRO (Australia’s national AI and data science research organization), Coburg University, University of St Andrews, and the Nara Institute of Science and Technology.
Publication:
Attygalle, N.T., Kljun, M., Quigley, A., Čopič Pucihar, K., Grubert, J., Biener, V., Leiva, L.A., Yoneyama, J., Toniolo, A., Miguel, A., Kato, H. and Weerasinghe, M., 2025, March. Text-to-Image Generation for Vocabulary Learning Using the Keyword Method. In Proceedings of the 30th International Conference on Intelligent User Interfaces (pp. 1381-1397).
The paper can be accessed here:
Abstract:
The ‘keyword method’ is an effective technique for learning foreign-language vocabulary. It involves creating a memorable visual link between what a word means and what its pronunciation in the foreign language sounds like in the learner’s native language. However, these memorable visual links remain implicit in people’s minds and are not easy to remember for a large number of words. To enhance the memorisation and recall of vocabulary, we developed an application that combines the keyword method with text-to-image generators to externalise the memorable visual links as images. These images serve as additional stimuli during the memorisation process.

To explore the effectiveness of this approach, we first ran a pilot study to investigate how difficult it is to externalise descriptions of mental visualisations of memorable links, by asking participants to write them down. We used these descriptions as prompts for a text-to-image generator (DALL-E 2) to convert them into images and asked participants to select their favourites. Next, we compared different text-to-image generators (DALL-E 2, Midjourney, Stable Diffusion, and Latent Diffusion) to evaluate the perceived quality of the images each generated. Despite heterogeneous results, participants mostly preferred images generated by DALL-E 2, which was therefore also used for the final study. In that study, we investigated whether providing such images enhances the retention of the vocabulary being learned, compared to the keyword method alone. Our results indicate that people did not encounter difficulties describing their visualisations of memorable links and that providing corresponding images significantly increases memory retention.
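The core idea in the abstract is that a learner's description of the mental link between a word's sound and its meaning becomes a prompt for a text-to-image generator. A minimal sketch of how such a prompt might be composed is shown below; the function name and the example vocabulary entry are illustrative assumptions, not the study's actual implementation.

```python
def build_keyword_prompt(foreign_word: str, keyword: str, meaning: str) -> str:
    """Compose a text-to-image prompt that depicts the sound-alike keyword
    together with the word's meaning, following the keyword method.

    This is a hypothetical sketch, not the application described in the paper.
    """
    return (
        f"A vivid, memorable scene combining '{keyword}' "
        f"(which sounds like the foreign word '{foreign_word}') "
        f"with '{meaning}'"
    )

# Illustrative example: Spanish 'pato' (a duck) sounds like English 'pot',
# so the scene might show a duck sitting in a pot.
prompt = build_keyword_prompt("pato", "pot", "a duck")
print(prompt)
```

The resulting string could then be sent to any text-to-image model (the study used DALL-E 2) to produce the image that externalises the learner's visual link.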
Follow HICUP Lab on its official channels for more updates:
X (formerly Twitter) account: https://x.com/HicupLab
Website: https://hicup.famnit.upr.si/
For feedback regarding our media content, you may contact our Social media and web manager.