Date: Wednesday, 22.05.2024 15:20-17:00 CEST
Location: Building S1|15 Room 133
If you are interested in a 1-on-1 meeting or a meal with the speaker, please contact the coordinator Angela Yu: angela.yu@tu-…
Abstract:
Infants gradually learn to recognize and categorize objects, a process that is influenced by language. This talk explores how caregivers' naming of objects, even when inconsistent and ambiguous, can enhance a child's visual understanding. Using a computational model and a synthetic dataset of images seen by a toddler-like agent during play, we study how matching images and words over time improves category recognition. Our findings show that small changes in how often objects are named can significantly affect learning, highlighting the importance of aligning visual and language inputs.
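To make the image-word matching idea concrete, below is a minimal sketch of a generic contrastive alignment objective (InfoNCE, as popularized by CLIP-style models): embeddings of co-occurring images and words are pulled together while mismatched pairs are pushed apart. The feature dimensions, projection layers, and random inputs are illustrative assumptions, not the model from the talk.

```python
# Hedged sketch of contrastive image-word alignment (not the authors' code).
# All sizes and the toy data below are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, img_dim, word_dim, emb_dim = 8, 512, 300, 64

# Stand-ins for visual features of play-time frames and for word
# embeddings of the caregiver's naming utterances.
img_feats = torch.randn(batch, img_dim)
word_feats = torch.randn(batch, word_dim)

img_proj = torch.nn.Linear(img_dim, emb_dim)
word_proj = torch.nn.Linear(word_dim, emb_dim)

def alignment_loss(imgs, words, temperature=0.07):
    # L2-normalize, then score every image against every word.
    z_i = F.normalize(img_proj(imgs), dim=-1)
    z_w = F.normalize(word_proj(words), dim=-1)
    logits = z_i @ z_w.t() / temperature
    # Co-occurring image/word pairs sit on the diagonal.
    targets = torch.arange(imgs.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

print(alignment_loss(img_feats, word_feats))
```

Minimizing this loss more often for frequently named objects is one way such a model can capture how naming frequency shapes category learning.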
We also discuss how humans learn relationships between objects. Using a bio-inspired neural network model, we simulate visual experiences to see how objects are grouped based on context, such as kitchen or bedroom scenes. Our results reveal that higher network layers group objects by context, while lower layers focus on object identity. This dual approach, aligning visuals with words and exploiting the temporal context in which objects appear, helps explain how we develop semantic knowledge.
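The layer-wise grouping claim can be probed with a generic analysis of the following kind: extract activations at a lower and a higher layer, then compare the average similarity of items that share an identity with items that share a context. The toy network, data, and labels in this sketch are assumptions for illustration, not the bio-inspired model from the talk.

```python
# Hedged sketch of a layer-wise representational analysis.
# Network, inputs, and labels are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),   # "lower" layer
    torch.nn.Conv2d(8, 16, 3, padding=1), torch.nn.ReLU(),  # "higher" layer
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
)

# Toy inputs: 4 object identities, each seen in 2 contexts
# (e.g. kitchen vs. bedroom scenes).
images = torch.randn(8, 3, 32, 32)
identity = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
context = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])

# Capture activations at the two layers with forward hooks.
acts = {}
def save(name):
    return lambda module, inputs, output: acts.__setitem__(name, output.flatten(1))
net[1].register_forward_hook(save("lower"))
net[3].register_forward_hook(save("higher"))
net(images)

def mean_sim(z, labels):
    # Average cosine similarity between distinct items sharing a label.
    z = F.normalize(z, dim=-1)
    total, n = 0.0, 0
    for a in range(len(z)):
        for b in range(a + 1, len(z)):
            if labels[a] == labels[b]:
                total += float(z[a] @ z[b])
                n += 1
    return total / n

for layer in ("lower", "higher"):
    print(layer,
          "same-identity:", round(mean_sim(acts[layer], identity), 3),
          "same-context:", round(mean_sim(acts[layer], context), 3))
```

In a trained model of the kind described, one would expect the same-context similarity to dominate at the higher layer and the same-identity similarity at the lower layer; here the untrained toy network only illustrates the measurement procedure.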
Overall, this talk presents computational models for exploring the role of language and context in shaping visual and semantic learning in early development.
Related manuscript:
Schaumlöffel, T., Aubret, A., Roig, G., & Triesch, J. (2023). Caregiver Talk Shapes Toddler Vision: A Computational Study of Dyadic Play. In 2023 IEEE International Conference on Development and Learning (ICDL). IEEE. https://arxiv.org/abs/2312.04118