My students and I are interested primarily in the representation and processing of relations. Along with language, the ability to perceive, represent, and reason about relations is the major factor distinguishing human thinking from the cognitive abilities of all other animals: it is what makes us the dominant species on the planet. How can neural architectures (brains or artificial neural networks) generate, represent, and manipulate relational structures? How does the human mind's solution to this problem manifest itself in observable behavior? My collaborators, students, and I investigate these questions in several domains.

One line of research concerns the representation of relational structure in visual perception: How do we represent the relations among an object's parts or features? Under what circumstances, and in what form, do we explicitly encode these relations, and how does our encoding affect the manner in which we recognize and categorize objects? What is the role of visual attention in the representation of object shape? We address these questions both empirically, by conducting experiments on object perception, recognition, and categorization with human subjects, and theoretically, by developing and testing computational (symbolic neural network) models of object perception and recognition.

Another line of research concerns the representation and processing of relational information in thinking and reasoning. We have developed a symbolic neural network model of analogical mapping, analogy- and rule-based inference, and schema induction: LISA (Learning and Inference with Schemas and Analogies; Hummel & Holyoak, 1997, 2003, Psychological Review). We recently generalized LISA to account for aspects of cognitive development, especially the development of relational concepts and relational representations (the DORA model; Doumas, Hummel, & Sandhofer, 2008, Psychological Review).
Our group is also interested in how adults learn relational concepts and categories, especially relational categories with a "family resemblance" structure, in which no single relation is shared by all members of the category. We also investigate how people generate explanations, and how the resulting explanations affect judgments of likelihood. In this domain, as in the others, we both build computational models and conduct experiments with human subjects.
PhD, University of Minnesota
Bowers, J. S., Malhotra, G., Dujmovic, M., Montero, M. L., Tsvetkov, C., Biscione, V., Puebla, G., Adolfi, F., Hummel, J. E., Heaton, R. F., Evans, B. D., Mitchell, J., & Blything, R. (Accepted/In press). Deep Problems with Neural Network Models of Human Vision. Behavioral and Brain Sciences. https://doi.org/10.1017/S0140525X22002813
Doumas, L. A. A., Puebla, G., Martin, A. E., & Hummel, J. E. (2022). A Theory of Relation Learning and Cross-Domain Generalization. Psychological Review, 129(5), 999-1041. https://doi.org/10.1037/rev0000346
Du, A. Y., Hummel, J. E., & Petrov, A. A. (2022). Exaggeration of Stimulus Attributes in the Representation of Relational Categories. Paper presented at the 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity (CogSci 2022), Toronto, Canada, pp. 3333-3338.
Du, A. Y., Hummel, J. E., & Petrov, A. A. (2021). Probing the Mental Representation of Relation-Defined Categories. Paper presented at the 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds (CogSci 2021), virtual (Austria), pp. 2024-2030.
Doumas, L. A. A., Puebla, G., Martin, A. E., & Hummel, J. E. (2020). Relation learning in a neurocomputational architecture supports cross-domain transfer. Paper presented at the 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines (CogSci 2020), virtual, pp. 932-937.