Research

I am broadly interested in how people understand the emotions and other mental states of those around them (Affective and Social Cognition). My primary approach to studying such reasoning is to build computational cognitive models. That is, I investigate how people intuitively reason about those around them, and try to codify such reasoning using computational models (usually via probabilistic approaches).

Computational cognitive modeling (i) allows researchers to specify and test precise, quantitative hypotheses about cognition and affect, and (ii) opens the door to many applications, such as enabling computers to "reason" about emotions and mental states in a human-like manner.
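To make the probabilistic flavor concrete, here is a deliberately minimal sketch of inferring a hidden emotion from an observed reaction via Bayes' rule. The emotion categories, reactions, and probabilities are invented purely for illustration; they are not taken from any model in my papers.

```python
# Toy Bayesian emotion inference: infer a hidden emotion from an observed
# reaction. All categories, priors, and likelihoods are invented for
# illustration only.

PRIOR = {"happy": 0.5, "sad": 0.3, "angry": 0.2}

# P(reaction | emotion): how likely each observable reaction is under each emotion.
LIKELIHOOD = {
    "happy": {"smile": 0.80, "cry": 0.05, "frown": 0.15},
    "sad":   {"smile": 0.10, "cry": 0.60, "frown": 0.30},
    "angry": {"smile": 0.05, "cry": 0.15, "frown": 0.80},
}

def infer_emotion(reaction):
    """Posterior P(emotion | reaction) via Bayes' rule."""
    unnorm = {e: PRIOR[e] * LIKELIHOOD[e][reaction] for e in PRIOR}
    z = sum(unnorm.values())
    return {e: p / z for e, p in unnorm.items()}

posterior = infer_emotion("cry")
```

Real models in this line of work reason over much richer spaces (appraisals, outcomes, utterances) and are typically written in probabilistic programming languages, but the underlying inferential logic is the same.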

In my work, I take an interdisciplinary approach that is theoretically grounded in cognitive science and affective science, and that draws on tools from computer science (probabilistic modeling; machine learning; natural language processing; social network analysis).

Understanding how humans reason about emotions and mental states (top) lets us build computational models and artificial intelligence that can similarly reason about emotions and mental states (right arrow). Furthermore, having better computational models (bottom) will allow us to ask more precise questions about the nature of human cognition (left arrow). This results in a virtuous cycle, in which scientific progress in psychology fuels progress in artificial intelligence, which in turn fuels more progress in psychology.



Core Research Directions

Modeling Affective Cognition

I build computational models of how people reason about emotions.
[Figure adapted from Ong et al., 2015]

When we see someone miss a bus, receive a present, or walk with a skip in their step, we have no difficulty inferring their emotions, their thoughts, and even what they might do next. This ability to effortlessly reason about affect --- Affective Cognition --- is crucial to our everyday lives. My research seeks to understand the cognitive mechanisms that underlie such complex reasoning.

Current projects include:
  • Models of how people integrate emotional cues
  • How people perform third-person appraisals
Sample publications:
  • Goel, S., Jara-Ettinger, J., Ong, D. C., & Gendron, M. (2024). Integration of facial and contextual cues in emotion inferences is limited and variable across categories and individuals. Nature Communications, 15, 2443.

  • Ong, D. C., Zaki, J., & Goodman, N. D. (2019). Computational models of emotion inference in Theory of Mind: A review and roadmap. Topics in Cognitive Science, 11(2), 338-357.

  • Ong, D. C., Zaki, J., & Goodman, N. D. (2015). Affective Cognition: Exploring lay theories of emotion. Cognition, 143, 141-162.


Appraisals and Emotions

A central component of theories of affective cognition is appraisals --- how people make sense of their experiences. My students, collaborators, and I have been studying these appraisals: for example, how appraisal knowledge scaffolds third-person emotion understanding, and how people acquire such knowledge. This work has also provided some of the theoretical basis for other work from my lab in AI (e.g., building AI to offer reappraisals; see below).

Current projects include:
  • Theoretical development of appraisals
  • Developmental models of affective cognition
Sample publications:
  • Yeo, G. & Ong, D. C. (2024). Associations Between Cognitive Appraisals and Emotions: A Meta-Analytic Review. Psychological Bulletin, 150(12), 1440–1471.

  • Doan, T., Ong, D. C., & Wu, Y. (2025). Emotion understanding as third-person appraisals: Integrating appraisal theories with developmental theories of emotion. Psychological Review, 132(1), 130–153.
    [Editor's Choice Award 🏆]

  • Asaba, M.*, Ong, D. C.*, & Gweon, H. (2019). Integrating expectations and outcomes: Preschoolers' developing ability to reason about others' emotions. Developmental Psychology, 55(8), 1680-1693.

Empathic Accuracy

We study how people understand the emotions of others at a fine-grained, second-by-second level using naturalistic videos.
[Figure from Devlin et al., 2016]

Are people accurate in their understanding of the emotions of those around them? What are the computational, psychological, and neural bases that support accurate emotion judgments?

In addition to using computational modeling and machine learning, we also study empathic accuracy (i) with neuroscientific approaches (fMRI, EEG), (ii) across demographic groups and across cultures, and (iii) in populations with mood disorders. I am especially interested in multimodal emotion understanding from naturalistic stimuli, and how such understanding is contextualized.

Sample publications:
  • Ong, D. C., Wu, Z., Zhi-Xuan, T., Reddan, M., Kahhale, I., Mattek, A., & Zaki, J. (2021). Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset. IEEE Transactions on Affective Computing, 12(3), 579-594.

  • Genzer, S.*, Ong, D. C.*, Zaki, J., & Perry, A. (2022). Mu rhythm suppression over sensorimotor regions is associated with greater empathic accuracy. Social Cognitive and Affective Neuroscience, 17(9), 788–801.


LLMpathy (LLM-generated empathy)

The latest AI technologies (e.g., Large Language Models) have yielded AI agents that can generate text that people find to be very supportive --- what I call "LLMpathy". This opens up many interesting questions, from psychology, to NLP, to ethics; and my group, together with many collaborators, is approaching these questions from many different angles.

Current projects include:
  • Theoretical refinement of perceived empathy
  • Psychological factors affecting people's interactions with and perceptions of LLMpathy / AI agents
  • Analyzing the language of empathy
  • Technical contributions that make these models better (e.g., using LLMs to offer targeted reappraisals)
Sample publications:
  • Rubin, M., Li, J., Zimmerman, F., Ong, D. C., Goldenberg, A., & Perry, A. (in press). Comparing the Value of Perceived Human versus AI-Generated Empathy. Nature Human Behaviour.

  • Lee, Y. K., Suh, J., Zhan, H., Li, J. J., & Ong, D. C. (2024). Large Language Models produce responses perceived to be empathic. In Proceedings of the 12th IEEE International Conference on Affective Computing and Intelligent Interaction (ACII).

  • Zhan, H., Zheng, A., Lee, Y. K., Suh, J., Li, J. J., & Ong, D. C. (2024). Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided. In Proceedings of the 1st Conference on Language Modeling (COLM).

Collaborators in this line of work include: Jessy Li, Jina Suh, Tatiana Lau, Anat Perry, and Amit Goldenberg.

Interventions

We use social psychological principles to increase people's motivation to empathize.
[Figure from intervention described in
Weisz et al., 2020]

We have been using social psychological interventions to increase empathy, by strengthening people's motivation to empathize with others.
Beyond empathy, we are also examining interventions for emotion regulation.

Sample publications:
  • Weisz, E., Chen, P., Ong, D. C., Carlson, R. W., Clark, M. D., & Zaki, J. (2022). A Brief Intervention to Motivate Empathy among Middle School Students. Journal of Experimental Psychology: General, 151(12), 3144–3153.

  • Weisz, E., Ong, D. C., Carlson, R. W., & Zaki, J. (2021). Building Empathy: A Brief Intervention to Promote Social Connection. Emotion, 21(5), 990–999.

Collaborators in this line of work include: Patricia Chen, Erika Weisz, Jamil Zaki

More psychologically-precise machine learning

We use various approaches to improve machine learning models for emotion recognition.
[Figure from a talk given in Nov 2021]

How can we train machine learning models to recognize human emotions accurately and in context? Can we use such technology to improve mental health and emotional well-being, in domains such as education?

Although my research involves multiple modalities (images, speech, etc.), most of my research deals specifically with understanding emotions from natural language.

Projects include:
  • Building better (contextualized; multimodal; fine-grained; time-series) machine learning models for emotion recognition
  • Integrating theory into deep learning models (e.g., using probabilistic programming)
Sample publications:
  • Ong, D. C., Soh, H., Zaki, J., & Goodman, N. D. (2021). Applying Probabilistic Programming to Affective Computing. IEEE Transactions on Affective Computing, 12(2), 306-317.
    [Best of IEEE Transactions on Affective Computing 2021 Paper Collection 🏆]

  • Suresh, V., & Ong, D. C. (2021). Not all negatives are equal: Label-Aware Contrastive Loss for fine-grained text classification. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).


AI Ethics

I am especially interested in ethical affective computing: how should we build and deploy these systems in an ethical manner?
How should current AI agents (LLMs) be deployed in specific use cases like AI Therapy?

Sample publications:
  • Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C.*, & Haber, N.* (2025). Expressing stigma and inappropriate responses prevent LLMs from safely replacing mental health providers. In 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT).

  • Ong, D. C. (2021). An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence. In Proceedings of the 9th International Conference on Affective Computing and Intelligent Interaction (ACII 2021).
    [Best Paper Award 🏆]

We are studying various ways to use large language models for psychologically-precise outcomes.
[Figure from an earlier version of Demszky et al. (2023)]

We are interested in using large language models in psychology for a variety of different outcomes (e.g., well-being, motivation).

For instance, with colleagues like David Yeager, we are studying how to use large language models to provide mindset-supportive language.

Sample publications:
  • Demszky*, D., Yang*, D., Yeager*, D. S., Bryan, C. J., Clapper, M., Eichstaedt, J. C., Hecht, C., Jamieson, J., Johnson, M., Jones, M., Krettek-Cobb, D., Lai, L., JonesMitchell, N., Ong, D. C., Dweck^, C. S., Gross^, J. J., & Pennebaker^, J. W. (2023). Using Large Language Models in Psychology. Nature Reviews Psychology. https://doi.org/10.1038/s44159-023-00241-5

  • Choudhury, M.*, Elyoseph, Z.*, Fast, N. J.*, Ong, D. C.*, Nsoesie, E. O.* & Pavlick, E.* (2025). The Promise and Pitfalls of Generative AI for Psychology and Society. Nature Reviews Psychology. 4, 75-80.

  • Hecht, C. A.*, Ong, D. C.*, Clapper, M., Jones, M., Demszky, D., Yang, D., Eichstaedt, J., Bryan, C. J., & Yeager, D. S. (accepted). Using Large Language Models in Behavioral Science Interventions: Promise and Risk. Behavioral Science & Policy.

Self-Regulated Learning

The Strategic Resource Use Intervention increases students' exam performance in two randomized controlled trials.
[Figure from Chen et al., 2017]

In collaboration with Patricia Chen and the Motivation and Self-Regulation lab, we have been studying self-regulated learning in students.

Sample publications:
  • Chen, P., Teo, D. W. H., Foo, D. X. Y., Derry, H. A., Hayward, B. T., Schulz, K. W., Hayward, C., McKay, T. A., & Ong, D. C. (2022). Real-World Effectiveness of a Social-Psychological Intervention Translated from Controlled Trials to Classrooms. npj Science of Learning, 7 (20).

  • Chen, P.*, Ong, D. C.*, Ng, J., & Coppola, B. P. (2021). Explore, Exploit, and Prune in the Classroom: Strategic Resource Management Behaviors Predict Performance. AERA Open, 7(1), 1–14.

  • Chen, P., Chavez, O., Ong, D. C., & Gunderson, B. (2017). Strategic Resource Use for Learning: A Self-administered Intervention that Guides Effective Resource Use Enhances Academic Performance. Psychological Science, 28(6), 774-785.

Collaborators in this line of work include: Timothy McKay and the ECoach team at the University of Michigan, and Brian Coppola.