Research

I am broadly interested in how people understand the emotions and other mental states of those around them (Affective and Social Cognition). In addition, I study human empathy, especially the mechanisms that support accurate empathic inferences.
Beyond studying how humans understand others' emotions, I apply my expertise in affective science to ask how machines can do the same, and how they might even offer "empathy". My work is varied and interdisciplinary, crossing affective science, cognitive science, and artificial intelligence, especially natural language processing and machine learning.

Affective Cognition

Modeling Affective Cognition

When we see someone miss a bus, receive a present, or walk with a skip in their step, we have no difficulty inferring their emotions, their thoughts, and even what they might do next. The ability to perform such reasoning, Affective Cognition (reasoning about affect), is crucial to our everyday lives. My research seeks to understand the cognitive mechanisms that underlie such complex reasoning.

Sample publications:
Goel, S., Jara-Ettinger, J., Ong, D. C., & Gendron, M. (2024). Integration of facial and contextual cues in emotion inferences is limited and variable across categories and individuals. Nature Communications, 15, 2443.
Ong, D. C., Zaki, J., & Goodman, N. D. (2019). Computational models of emotion inference in Theory of Mind: A review and roadmap. Topics in Cognitive Science. 11(2), 338-357.
Ong, D. C., Zaki, J., & Goodman, N. D. (2015). Affective Cognition: Exploring lay theories of emotion. Cognition, 143, 141-162.

I build computational models of how people reason about emotions. [Figure adapted from Ong et al., 2015]

Appraisals and Emotions

A central component of theories of affective cognition is appraisals: how people make sense of their experiences. My students, collaborators, and I have been studying these appraisals, for example how appraisal knowledge scaffolds third-person emotion understanding and how people acquire such knowledge. This work also provides some of the theoretical basis for other work from my lab in AI (e.g., building AI to offer reappraisals; see below).

Sample publications:
Doan, T., Ong, D. C., & Wu, Y. (2025). Emotion understanding as third-person appraisals: Integrating appraisal theories with developmental theories of emotion. Psychological Review, 132(1), 130–153.
[Editor's Choice Award 🏆 ]
Yeo, G. & Ong, D. C. (2024). Associations Between Cognitive Appraisals and Emotions: A Meta-Analytic Review. Psychological Bulletin, 150(12), 1440–1471.

Empathy

LLMpathy: LLM-generated empathy

The latest AI technologies (e.g., Large Language Models) can generate text that people find to be very supportive, what I call "LLMpathy". This opens up many interesting questions spanning psychology, NLP, and ethics, and my group, together with many collaborators, is approaching these questions from many different angles.

Sample publications:
Rubin, M., Li, J., Zimmerman, F., Ong, D. C., Goldenberg, A., & Perry, A. (2025). Comparing the Value of Perceived Human versus AI-Generated Empathy. Nature Human Behaviour, 9, 2345–2359.
Lee, Y. K., Suh, J., Zhan, H., Li, J. J., & Ong, D. C. (2024). Large Language Models produce responses perceived to be empathic. In Proceedings of the 12th IEEE International Conference on Affective Computing and Intelligent Interaction (ACII).
Zhan, H., Zheng, A., Lee, Y. K., Suh, J., Li, J. J., & Ong, D. C. (2024). Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided. In Proceedings of the 1st Conference on Language Modeling (COLM).

Empathic Accuracy

Are people accurate in their understanding of the emotions of those around them? What are the computational, psychological, and neural bases that support accurate emotion judgments?

In addition to using computational modeling and machine learning, we also study empathic accuracy (i) with neuroscientific approaches (fMRI, EEG), (ii) across demographic groups and across cultures, and (iii) in populations with mood disorders. I am especially interested in multimodal emotion understanding from naturalistic stimuli, and how such understanding is contextualized.

Sample publications:
Genzer, S.*, Ong, D. C.*, Zaki, J., & Perry, A. (2022). Mu rhythm suppression over sensorimotor regions is associated with greater empathic accuracy. Social Cognitive and Affective Neuroscience, 17(9), 788–801.
Ong, D. C., Wu, Z., Zhi-Xuan, T., Reddan, M., Kahhale, I., Mattek, A., & Zaki, J. (2021). Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset. IEEE Transactions on Affective Computing, 12(3), 579-594.

AI and Emotions (including Affective Computing)

AI Ethics

I am especially interested in AI ethics with respect to emotions: how should we build and deploy these systems in an ethical manner?
How should current AI agents (LLMs) be deployed in specific use cases like AI therapy?

Sample publications:
Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C.*, & Haber, N.* (2025). Expressing stigma and inappropriate responses prevent LLMs from safely replacing mental health providers. In 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT).
Ong, D. C. (2021). An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence. In Proceedings of the 9th International Conference on Affective Computing and Intelligent Interaction (ACII 2021).
[Best Paper Award 🏆]

Using Large Language Models in Psychology


We are studying various ways to use large language models to expand psychological research, for instance by generating and delivering tailored interventions.

Sample publications:
Demszky*, D., Yang*, D., Yeager*, D. S., Bryan, C. J., Clapper, M., Eichstaedt, J. C., Hecht, C., Jamieson, J., Johnson, M., Jones, M., Krettek-Cobb, D., Lai, L., JonesMitchell, N., Ong, D. C., Dweck^, C. S., Gross^, J. J., & Pennebaker^, J. W. (2023). Using Large Language Models in Psychology. Nature Reviews Psychology. https://doi.org/10.1038/s44159-023-00241-5
Choudhury, M.*, Elyoseph, Z.*, Fast, N. J.*, Ong, D. C.*, Nsoesie, E. O.* & Pavlick, E.* (2025). The Promise and Pitfalls of Generative AI for Psychology and Society. Nature Reviews Psychology. 4, 75-80.
Hecht, C. A.*, Ong, D. C.*, Clapper, M., Jones, M., Demszky, D., Yang, D., Eichstaedt, J., Bryan, C. J., & Yeager, D. S. (2025). Using Large Language Models in Behavioral Science Interventions: Promise and Risk. Behavioral Science & Policy, 11(1), 1–9.