
Researchers Discuss Implications of Using AI to Address Mental Health at Harvard Law School Webinar

Research co-authors Piers M. Gooding and Lydia X.Z. Brown discussed artificial intelligence in mental health treatment during a Wednesday webinar. By Julian J. Giordano
By Nicole Y. Lu and Camilla J. Martinez, Contributing Writers

Research co-authors Piers M. Gooding and Lydia X.Z. Brown discussed the ethics of artificial intelligence in mental health treatment during a Wednesday evening Harvard Law School webinar.

Gooding and Brown, co-authors of the 2022 report “Digital Futures in Mind: Reflecting on Technological Experiments in Mental Health and Crisis Support,” were joined by experts in artificial intelligence in medicine and mental health, including Carlos A. Larrauri, a psychiatric mental health nurse practitioner and member of the board of directors of the National Alliance on Mental Illness; Rhonda Moore, a program director at the National Institutes of Health; and Sara Gerke, an assistant professor at Penn State Dickinson Law.

The event was co-sponsored by the Harvard Law School Project on Disability, Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, and GlobalMentalHealth@Harvard.

Brown, an adjunct lecturer and a core faculty member in the Disability Studies Program at Georgetown University, opened the webinar by situating technological developments meant to address mental health concerns, such as social media surveillance, within the context of broader systemic inadequacies.

Brown’s conversations with community members on social media revealed widespread fear about the dangers of sharing information about mental health concerns online.

“This fear is largely driven by a particular concern that data may be shared not only with the company providing a social media platform but with local law enforcement in dangerous and sometimes deadly attempts to intervene in a person’s mental health crisis by applying a carceral response,” they said.

Gooding, a research fellow at the Melbourne Law School, described the research group’s collaborative approach and central incorporation of people who “had drawn on lived experience with engaging with mental health services or experiencing mental health conditions.”

In response to the argument that regulatory and legal frameworks could stifle technological development, Gooding said, “we came at it from the perspective that regulation is more about protecting people’s rights, both individually and collectively.”

Following Brown and Gooding’s discussion of the findings presented in their publication, each panelist shared their thoughts on the efficacy and implications of using technology to address mental health.

Larrauri began by sharing his personal journey with mental health struggles. He discussed the impact of these experiences in shaping his position as a proponent of artificial intelligence in mental health treatment, especially in early intervention.

“We must push for a patient-centered approach, grounded in robust, ethical principles,” Larrauri said.

Following Larrauri’s insights, Gerke provided an overview of both the potential advantages and limitations of using innovative technology to treat mental health concerns, weighing the broad accessibility and objectivity of digital resources and the growing need for healthcare services against the privacy concerns surrounding artificial intelligence developed for these purposes.

“To conclude, AI mental health apps and chatbots are promising, but also raise several ethical and also legal challenges that we should address before releasing them uncontrolled on the market and potentially harming patients,” she said.

Moore spoke next on the report’s lack of coverage of socioeconomic disparities exacerbated by AI between what she described as the Global South and the Global North. Looking towards the future, she stressed the importance of ethnographic exploration in the Global South in “postcolonial computing, decolonial computing, and data extractivism.”

In a lively discussion following the panel, Brown addressed a question regarding how to set ethical boundaries in AI-driven mental health initiatives.

“It is impossible to divorce use cases from social, cultural, and political structures and realities,” they said, referencing the larger context of existing systemic issues.

In an interview following the event, Gooding said he hopes their research “helps to clear the fog of hype and promote a very sober and clear-eyed public discussion about the possibilities and perils of data-driven and algorithmic technology in the mental health context.”

Correction: March 27, 2023

A previous version of this article incorrectly stated that the webinar was solely sponsored by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics. In fact, the event was co-sponsored by the Harvard Law School Project on Disability, Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, and GlobalMentalHealth@Harvard.


Tags
Harvard Law School, Mental Health