
Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

Literature Details

Resource Type:
WOS Classification:
PubMed Classification:

Indexed in: SCIE, SSCI

Affiliations: [1]Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China [2]Department of Radiology, Guangdong General Hospital, Guangzhou 510080, China [3]Department of MR, Foshan Hospital of Traditional Chinese Medicine, Foshan 528000, China [4]Department of Psychology, Tsinghua University, Beijing 100084, China
Source:
ISSN:

Keywords: audiovisual face perception; brain pattern; crossmodal integration; decoding; feature-selective attention; reproducibility

Abstract:
Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual face perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects had to judge the gender/emotion category of each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition while functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using decoding accuracy and brain pattern reproducibility indices, obtained from the fMRI data with a multivariate pattern analysis method. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
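As a rough illustration of the analysis described in the abstract, the Python sketch below shows how a cross-validated decoding accuracy and a split-half pattern reproducibility index of this general kind could be computed with a standard multivariate pattern analysis pipeline. The data arrays, labels, ROI, and the specific reproducibility measure are hypothetical placeholders, not the authors' actual implementation.

# Hedged sketch (not the authors' code): decoding accuracy plus a split-half
# pattern reproducibility index, assuming hypothetical inputs `patterns`
# (n_trials x n_voxels ROI response patterns, e.g., from pSTS/MTG) and
# `labels` (0/1 gender or emotion category).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
patterns = rng.standard_normal((80, 200))   # placeholder: 80 trials x 200 voxels
labels = np.tile([0, 1], 40)                # placeholder alternating category labels

# Decoding accuracy: 5-fold cross-validated linear classification of the category.
decoding_accuracy = cross_val_score(LinearSVC(max_iter=10000), patterns, labels, cv=5).mean()

# Reproducibility index (one common variant): correlation between the mean
# response patterns of two independent halves of the trials, per category.
half = len(labels) // 2
r_values = []
for category in (0, 1):
    p1 = patterns[:half][labels[:half] == category].mean(axis=0)
    p2 = patterns[half:][labels[half:] == category].mean(axis=0)
    r_values.append(np.corrcoef(p1, p2)[0, 1])
reproducibility = float(np.mean(r_values))

print(f"decoding accuracy: {decoding_accuracy:.2f}, reproducibility: {reproducibility:.2f}")

In this kind of pipeline, higher decoding accuracy and higher split-half pattern correlation in the audiovisual condition than in the unimodal conditions would correspond to the "enhanced neural representation of task-relevant features" reported in the abstract.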

Funding:
Language:
Times Cited:
WOS:
PubMed ID:
CAS (Chinese Academy of Sciences) Journal Ranking:
Year of publication [2014 edition]:
Major category | Tier 1, Medicine
Subcategory | Tier 2, Neuroscience
Latest [2025 edition]:
Major category | Tier 3, Medicine
Subcategory | Tier 3, Neuroscience
JCR Quartile:
Year of publication [2013 edition]:
Q1 NEUROSCIENCES
Latest [2023 edition]:
Q2 NEUROSCIENCES

Impact Factor: Latest [2023 edition] | Latest 5-year average | Year of publication [2013 edition] | 5-year average at publication | Year before publication [2012 edition] | Year after publication [2014 edition]

First Author:
First Author's Affiliation: [1]Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China
Corresponding Author:
Corresponding Author's Affiliations: [1]Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China; [2]Department of Psychology, Tsinghua University, Beijing 100084, China
Recommended Citation Format (GB/T 7714):
APA:
MLA:

