(e.g., a scene containing a dog must also contain a canine). Finally, we used regularized linear regression (see Experimental Procedures for details; Kay et al., 2008; Mitchell et al., 2008; Naselaris et al., 2009; Nishimoto et al., 2011) to characterize the response of each voxel to each of the 1,705 object and action categories (Figure 1). The linear regression procedure produced a set of 1,705 model weights for each individual voxel, reflecting how each object and action category influences BOLD responses in that voxel.
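
The regression step is described here only at a high level; as a rough sketch (not the authors' implementation), a regularized ridge regression of each voxel's BOLD time course on binary category indicator features could look like the following. The dimensions, variable names, and the single shared regularization strength are placeholder assumptions, and a real analysis would also need to account for hemodynamic delay and tune the regularization by cross-validation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder dimensions (assumptions for illustration only).
n_timepoints, n_categories, n_voxels = 2000, 1705, 1000

# X[t, c] = 1 if category c is present in the movie stimulus at time t, else 0.
# Y[t, v] = BOLD response of voxel v at time t.
# Random data stands in for the real stimulus labels and responses.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)
Y = rng.standard_normal((n_timepoints, n_voxels))

# Regularized (ridge) linear regression, fit jointly for all voxels:
# each column of `weights` holds the 1,705 category weights for one voxel.
model = Ridge(alpha=100.0, fit_intercept=True)
model.fit(X, Y)
weights = model.coef_.T   # shape: (1705 categories, n_voxels)

print(weights.shape)      # -> (1705, 1000)
```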

Our modeling procedure produces detailed information about the representation of categories in each individual voxel in the brain. Figure 2A shows the category selectivity for one voxel located in the left parahippocampal place area (PPA) of subject A.V. The model for this voxel shows that BOLD responses are strongly enhanced by categories associated with man-made objects and structures (e.g., “building,” “road,” “vehicle,” and “furniture”), weakly enhanced by categories associated with outdoor scenes (e.g., “hill,” “grassland,” and “geological formation”) and humans (e.g., “person” and “athlete”), and weakly suppressed by nonhuman biological categories (e.g., “body parts” and “birds”). This result is consistent with previous reports that PPA most strongly represents information about outdoor scenes and buildings (Epstein and Kanwisher, 1998). Figure 2B shows category selectivity for a second voxel located in the right precuneus (PrCu) of subject A.V. The model shows that BOLD responses are strongly enhanced by categories associated with social settings (e.g., people, communication verbs, and rooms) and suppressed by many other categories (e.g., “building,” “city,” “geological formation,” and “atmospheric phenomenon”). This result is consistent with an earlier finding that PrCu is involved in processing social scenes (Iacoboni et al., 2004).

We used principal components analysis (PCA) to recover a semantic space from the category model weights in each subject. PCA ensures that categories represented by similar sets of cortical voxels project to nearby points in the estimated semantic space, while categories represented very differently project to distant points. To maximize the quality of the estimated space, we included only voxels that were significantly predicted (p < 0.05, uncorrected) by the category model (see Experimental Procedures for details). Because humans can perceive thousands of categories of objects and actions, the true semantic space underlying category representation in the brain probably has many dimensions. However, given the limitations of fMRI and a finite stimulus set, we expect to recover only the first few dimensions of the semantic space for each individual brain, and still fewer dimensions that are shared across individuals.
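
To make the voxel-selection and PCA steps concrete, the sketch below continues the illustrative example above: voxels are kept only if a held-out prediction test gives p < 0.05, and PCA is then run across voxels so that every category receives coordinates in the recovered space. The precomputed p-values, names, and dimensions are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA

# `weights` would come from the regression sketch above (1,705 categories x
# n_voxels); `prediction_pvalues` holds one held-out prediction p-value per
# voxel.  Random placeholders are used here so the example runs standalone.
rng = np.random.default_rng(1)
weights = rng.standard_normal((1705, 1000))
prediction_pvalues = rng.uniform(size=1000)

# Keep only voxels significantly predicted by the category model
# (p < 0.05, uncorrected), as in the voxel-selection step described above.
significant = prediction_pvalues < 0.05
selected = weights[:, significant]        # (1705, n_significant_voxels)

# PCA across voxels: categories with similar voxel weight patterns project
# to nearby points in the low-dimensional semantic space.
pca = PCA(n_components=4)
semantic_coords = pca.fit_transform(selected)   # (1705, 4)

print(semantic_coords.shape, pca.explained_variance_ratio_)
```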
