The neural basis of semantic cognition: Evidence from neuropsychology, neuroimaging and neurostimulation

Friday, April 21, 2017 - 12:00
University of York

Abstract

Over the last few years, neuropsychological, functional neuroimaging and transcranial magnetic stimulation (TMS) studies have supported the view that a complex, distributed neural network underpins semantic cognition. This talk traces the putative role of each region within this network. Comparisons of patients with semantic dementia (SD) and patients with multimodal semantic impairment following stroke aphasia (SA) indicate that semantic cognition draws on at least two interacting components: semantic representations (degraded in SD) and semantic control processes (deficient in SA). To explore the first of these components, we have employed distortion-corrected fMRI and TMS in healthy volunteers; these studies convergently indicate that the anterior temporal lobes (ATL; atrophied in SD) combine information from different modalities within an amodal semantic “hub”. Modality-specific sensory and motor cortices also make a critical contribution to knowledge of particular categories. This network of regions interacts with semantic control processes reliant on the left inferior frontal cortex (LIFC) and posterior middle temporal gyrus (pMTG). SA patients with damage to these regions have difficulty focusing on aspects of knowledge that are relevant to the current goal or context, in both verbal and non-verbal semantic tasks (such as object use). fMRI and TMS again provide convergent evidence: both methods show that LIFC and pMTG act together as a distributed network lying between domain-general executive regions and the default mode network, which, when unconstrained by other networks, supports more automatic aspects of semantic retrieval.

Reference

Lambon Ralph, Jefferies, Patterson & Rogers (2017). The neural and computational bases of semantic cognition. Nature Reviews Neuroscience, 18, 42–55.