Articles and Information

Relevant Articles

1) Harvard Business Study on AI Use

Summary: From 2024 to 2025, the top uses of AI shifted from basic tasks such as generating ideas toward more sophisticated and personal applications, including "Therapy and Companionship," organizing one's life, enhancing learning, and writing computer code.

Literature

1) El-Sayed, S., et al. (2024). A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI. Google DeepMind, arXiv.

Summary: This Google DeepMind paper establishes a systematic framework for studying and mitigating the harms associated with persuasive generative AI systems, distinguishing between rational persuasion (based on evidence) and manipulation (exploiting cognitive biases). To address the growing risks, the authors shift the focus from mitigating harmful outcomes to mitigating harmful processes, specifically by mapping the underlying mechanisms of manipulation that erode a user's cognitive autonomy and decision-making integrity.

2) Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025). Large Language Models as Mental Health Resources: Patterns of Use in the United States. Practice Innovations.

Summary: A survey of U.S. residents with mental health conditions found a substantial rate of adoption: nearly half of respondents reported turning to general-purpose Large Language Models (LLMs) for support with issues such as anxiety, depression, and personal advice. The majority of these users reported that the LLMs improved their mental health, and more than a third found the AI support more beneficial than traditional human therapy, although a small percentage encountered harmful responses.

3) Zhang, Y., Li, X., Zhu, J., Sheng, Z., & Rousmaniere, T. (2025). What happens, what helps, what hurts: A qualitative analysis of user experiences with large language models for mental health support. Unpublished manuscript.

Summary: This qualitative study examines 243 adults with mental health concerns who used LLMs for emotional support. The research highlights significant limitations and risks, including LLMs providing non-actionable, overly generic, or even risk-inducing advice, underscoring the need for clinical safeguards despite the tools' perceived utility.

Videos

Committee on the Judiciary, Subcommittee on Crime and Counterterrorism. (2025, September 16). Examining the Harm of AI Chatbots [Video]. U.S. Senate.

Summary: The September 16, 2025, U.S. Senate hearing on "Examining the Harm of AI Chatbots" featured emotional testimony from three families, including the father of Adam Raine, whose children were severely harmed or died after interacting with these chatbots. Experts, including the APA's Chief of Psychology, presented evidence on the psychological concerns and manipulative patterns of these chatbots, underscoring the urgent need for congressional safeguards to protect minors.