Natalie Sebanz is a Professor in Cognitive Science at Central European University. Her research interests revolve around the cognitive and neural basis of social interaction, with a special focus on how we coordinate our actions with others. Currently her main interest is how we learn by participating in joint actions (funded by an ERC Proof of Concept grant). Having obtained her PhD at the Max Planck Institute for Psychological Research, Munich, she has held appointments at Rutgers University (US), the University of Birmingham (UK), and Radboud University (NL). Natalie is a recipient of the European Science Foundation's Young Investigator Award and the Young Mind and Brain Prize.
How t(w)o act together?
Our ability to perform joint actions has sparked much debate about the nature of shared intentions. From a cognitive neuroscience point of view, joint action also raises questions about the perceptual, cognitive, neuronal, and motor processes that make it possible for us to engage in and succeed at coordinating our actions with others. In this talk, I will review experimental studies on dyadic joint actions showing that (1) people have a tendency to integrate their task partner's actions in their immediate planning, (2) task partners form action plans that specify joint outcomes, (3) when deciding on a course of action, task partners consider the joint costs of their own and others' actions, and (4) to facilitate coordination, task partners invest effort in making themselves predictable. I will discuss implications of these findings for our understanding of action planning, partner choice, rapport, and commitment.
Dirk Wulff is a Senior Research Scientist at the Max Planck Institute for Human Development, where he leads the Search and Learning research area within the Center for Adaptive Rationality. He is also a Senior Adjunct Researcher at the Center for Cognitive and Decision Science at the University of Basel. His work is concerned with the cognitive underpinnings of real-world behavior, focusing on the interaction between search, learning, mental representations, and the information environment. His work draws on various methodological approaches, ranging from behavioral experiments to large language models. In addition to his academic work, he is active in data science education (https://therbootcamp.github.io/) and philanthropy (http://correlaid.ch/).
Mapping semantic representations across the lifespan
Throughout life, people accumulate a wealth of idiosyncratic experiences. These experiences should shape people's mental representations of the world and, in turn, thought and behavior. In this presentation, I will describe our efforts to measure mental representations at the level of the individual in the form of semantic networks and to link these networks to cognitive performance across various tasks. By understanding individual differences in people's semantic representations and performance, we hope to advance our understanding of cognitive aging and prepare improved diagnostics of age-related pathologies.
Esra Mungan received her undergraduate degree in Psychology from Boğaziçi University in 1990. She undertook her graduate studies at American University (Washington, D.C.), where she was admitted to the Ph.D. program in Experimental Psychology with a Dean's Honor dual scholarship. During her PhD work, Mungan was invited by the Boğaziçi University Psychology Department to teach an obligatory 3rd-year course (PSY 301-302 Research Methods I-II) as a part-time faculty member from 1994 to 1996. After that she left the PhD program at American University and started to work full-time elsewhere. She returned to Boğaziçi University as a full-time instructor in 2002 and re-entered the PhD program in Behavior, Cognition and Neuroscience at American University in 2005, receiving her PhD degree in January 2007. Mungan served as Vice Chair of the department from 2002 to 2005, and since 2005 she has been the coordinator of the Departmental Erasmus and Exchange Program. Her publications and conference presentations have covered forgetting mechanisms in long-term episodic memory, implicit processes, qualitative aspects of memory and memory awareness, and effects of encoding processes in music memory. More recently her interests have turned to music cognition, with special focus on music segmentation, Gestalt principles in music perception, rhythm perception, and the interfaces of language, poetry, and music. Among her other interests are conceptualizations of evolutionary theory as well as subjects related to cognitive science.
Gestalt Theory: A Revolution Put on Pause?
Prospects for a Paradigm Shift in the Psychological Sciences
In my talk I will discuss whether there are prospects for a paradigm shift in the psychological sciences, i.e., a shift away from the largely unquestioned, taken-for-granted sequential, piece-to-whole understanding of the now almost 70-year-old information-processing mainstream, toward a perspective that takes as its starting point the whole, and hence meaning. The whole may stand for an object embedded in its immediately salient as well as inconspicuous environment, whose parts cannot be made sense of without knowing and understanding their roles in the larger configuration. It may stand for an organism or an organismic collectivity embedded in its environment, or for a person or a collectivity of people embedded in their immediate and phenomenal field, whom we will likewise not understand unless we understand their larger environmental, societal, and cultural embeddedness.
Lev Vygotsky is said to have gone into a crisis in his journey of meaning-making once he saw that the complexity of the human could not be understood within a simple, linear, mechanistic perspective. Impressively, this led him to courageously change his entire conceptualization only a few years before his early death in an ideologically divided world. Alexander Luria was well aware that human cognition could never be understood without understanding cultural mediation, historical development, and praxis, and that all these layers are in such an intricate, complex interrelation that we cannot simply split or slice them for isolated analyses.
In today's scientific climate, where the mainstream information-processing perspective serves as an unchallenged, often hidden assumption within neuroscience and computer science, the two popular sciences casting their almost suffocating shadows on psychology, I will look for recent, promising developments that might be paving the road to a perspective proposed long ago by the Gestaltists yet somehow "lost in translation". In that sense, Gestalt theory, which to this day has been widely misrepresented as a theory about 'a bunch of grouping principles in static vision', is the very first dynamical theory within the psychological sciences, one that meticulously insisted on the uncompromising necessity of looking at the complex embeddedness of the whole. And it did so with a firm philosophical grounding (something ever more missing in theory building in psychology) that demanded the inclusion of the first-person perspective.
Albert Gatt is a full professor at the Natural Language Processing group of the Dept. of Information and Computing Sciences at Utrecht University. His research mostly focuses on the automatic generation of language from non-linguistic information (a.k.a. Natural Language Generation). One important aspect of this is how systems -- artificial or human -- learn meaningful relationships between language and the non-linguistic, especially the perceptual, world. Albert combines machine learning methods such as neural networks, and experimental psycholinguistic methods in his research. Some of the topics he explores are data-to-text generation, the vision-language interface, the production and generation of referring expressions, and the evaluation of NLP models. He has also worked quite a bit on NLP for Maltese, and introduced corpora and BERT models for Maltese.
How do people describe what they see, and how can we get generative models to emulate this capacity? Vision-Language models are neural networks trained on pairings of image and text data. One of the generative tasks they can be trained to perform is the description (or captioning) of images. Much research in this area is heavily object-centric: our datasets tend to consist of descriptions of images based on the objects they contain and their spatial relationships, and this is reflected in the way models learn to describe visual scenes. On the other hand, research in human perception and language production suggests that people use object and spatial information to draw further inferences about what they see, and to describe it accordingly. For example, objects of a particular kind in a specific configuration (say, chairs arranged around a long table) are interpreted as composing a particular type of scene or location (say, a boardroom).
In this talk, I will discuss some of our research on image captioning and image-text understanding beyond object-centric captions. I will introduce a new dataset that explicitly aligns object-centric captions with scene-level descriptions, as well as descriptions of actions and rationales. Cross-modal ablation experiments suggest that pretrained Vision-Language models display variable capacity for aligning scene-level descriptions with images in zero-shot settings. On the other hand, finetuning generative models to generate scene descriptions results in measurable changes in the way they deploy attention over images, suggesting interesting parallels to some results from the psychology of perception. The discussion will then focus on a novel adaptation of SHAP-based explanations to this domain: by generating explanations at the caption (rather than the token) level, and using semantic visual priors, we are able to evaluate the extent to which models attend to the 'right' regions in generating descriptions of scenes and actions.
Gülşen Eryiğit is the top-cited researcher in Turkey in the field of Natural Language Processing. She is a senior action editor for ACL Rolling Review of the Association for Computational Linguistics (ACL), a faculty member of the Artificial Intelligence and Data Engineering Department at Istanbul Technical University, the coordinator of the ITU Natural Language Processing Group, and the director of ITU TÖMER. She has served as a reviewer and author for many prestigious NLP journals and conferences. In recent years, she has worked as a coordinator or researcher on many scientific projects funded by the EU, Tubitak, and the Ministry of Industry and Technology, and as a consultant on several industrial R&D projects funded by the EU and Tubitak TEYDEB. She also acts as a project evaluator and observer for these funding agencies. She holds one issued and one pending patent, and she realized the first software export from the ITU Technology Transfer Office.