In a world of big data and abundant computational resources, there has been growing interest in validating computational models of decision making by subjecting them to more rigorous constraints. One prominent area of study is model-based cognitive neuroscience, where measures of neural activity are explained and interpreted through the lens of a cognitive model. Although some early work has developed the statistical framework for exploiting the covariation between brain and behavior through factor-analytic linking functions, current methods are still far from providing parsimonious accounts of high-dimensional (e.g., voxel-level) data. In this article, we contribute to this endeavor by investigating the fidelity of regularization methods such as the Lasso. Here, a combination of local and global penalty terms is applied to shrink elements of the factor loading matrix toward zero, reducing the false alarm rate. Such penalties facilitate the emergence of parsimonious network structure in the study of neural activation, paving the way for clearer interpretations of high-dimensional data. We show through a set of three simulation studies and one application to real data that the Lasso can be an effective regularization method in the context of linking complex patterns of brain data to theoretical explanations of decisions. Although our analyses are specific to linking brain to behavior, the structure of the model is invariant to the type of high-dimensional data under investigation.
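The core mechanism described above, an L1 (Lasso) penalty that shrinks small factor loadings exactly to zero, can be illustrated with its proximal operator, the soft-thresholding function. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): the loading matrix, its dimensions, and the penalty value are all assumptions chosen for demonstration.

```python
import numpy as np

def soft_threshold(loadings, penalty):
    """Proximal operator of the L1 (Lasso) penalty: shrinks every
    loading toward zero and sets sufficiently small ones exactly to zero."""
    return np.sign(loadings) * np.maximum(np.abs(loadings) - penalty, 0.0)

rng = np.random.default_rng(0)

# Hypothetical ground truth: 10 "voxels" x 2 latent factors, where the
# first 5 voxels load only on factor 1 and the rest only on factor 2.
true = np.zeros((10, 2))
true[:5, 0] = 1.0
true[5:, 1] = 1.0

# A noisy, fully dense estimate of the loading matrix.
estimate = true + 0.15 * rng.standard_normal((10, 2))

# Applying the penalty zeroes out most spurious (truly zero) loadings,
# recovering a sparse, interpretable network structure.
sparse = soft_threshold(estimate, penalty=0.3)
print("nonzero loadings:", int((sparse != 0).sum()), "of", sparse.size)
```

In practice the penalty enters the estimation itself (e.g., inside a penalized likelihood or a Gibbs/proximal update), but the qualitative effect is the same: spurious brain-behavior links are driven to exactly zero rather than merely attenuated, which is what reduces the false alarm rate.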
Publisher Copyright: © 2021 American Psychological Association
All Science Journal Classification (ASJC) codes: Psychology (miscellaneous)