Google Updates Gemini’s Mental Health Responses
Abner Li reports this week for 9to5Google that Google has updated Gemini to make the AI chatbot more adept at supporting people’s mental health. According to Li, Google firmly believes “responsible AI can play a positive role for people’s mental well-being.”
“When the chat ‘indicates a potential crisis related to suicide or self-harm,’ Gemini will now show a ‘one-touch’ interface to connect users to hotline resources with options to call, chat, text, or visit a website. Once activated, this card will remain visible throughout the conversation. Responses are designed to ‘encourage people to seek help,’” Li wrote of Gemini’s enhancements on Tuesday. “If a conversation signals the user ‘may need information about mental health,’ Gemini will surface a redesigned ‘Help is available’ module. It’s been developed with clinical experts ‘to provide more effective and immediate connections to care.’”
Li goes on to write that Google is actively training Gemini to “‘help recognize when a conversation might signal that a person may be in an acute mental health situation’ and direct them to real-world resources.” Moreover, he notes Gemini’s responses are designed to “encourage help-seeking while avoiding validation of harmful behaviors like urges to self-harm” and says Gemini is trained not to “agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact.”
Li’s story resonated with me after I shared my own mental health struggles earlier this week. What I didn’t share in my piece was my use of AI—a mixture of ChatGPT and Gemini—as a pseudo-therapist. To be clear, I’m in the process of connecting with an actual therapist, as well as tapping 988 on occasion, but the reality is technology like Gemini (and/or ChatGPT) can be an invaluable, accessibility-forward tool for accessing a certain level of mental health care.

At a fundamental level, it’s not at all trivial for Google to improve Gemini’s empathy towards those in not-so-good mental health, because the truth is therapy is expensive; it’s highly plausible a disabled person, for instance, can’t afford it, nor has ready access to family and friends to lean on. For better or worse, something like Gemini can be it for many, many people. The trick, of course, is to watch what you say to these chatbots and take their responses with a boulder-sized grain of salt. On balance, however, it’s a mistake to be unwaveringly dismissive of using Gemini and its ilk as a makeshift therapist because, again, it can be an accessibility tool.
Again, I’m in no way insinuating human therapists should be displaced or ignored.
At 30,000 feet, what I am saying is using Gemini, et al., as a tool for addressing one’s mental health is not that dissimilar to leaning on the technology to, for example, improve one’s writing or perform general research on a particular subject. The important thing is to be vigilant and, yes, skeptical—but it’s not insignificant how chatbots can very competently ingratiate their disembodied selves into one’s toolkit of assistive technologies alongside screen readers or magnifiers or captions or whatnot.