A Cautionary Tale: One Therapist’s Experience with AI


This article is part of our weekly TGIF newsletter series. To get these in your inbox weekly, sign up below:


It’s Time to Talk about AI

I recently spent about a day down a rabbit hole trying to understand what has happened in the AI landscape in relation to mental health. This summer, I found myself relying on what I would (jokingly?) call my friend and best employee, Claude. Claude made my emails more efficient, helped me navigate some tricky parenting situations, and even crafted a plan for my sore hip recovery. During the busier summer months, I justified my use of Claude by touting, "work smarter, not harder."

And then, something clicked. When I wasn't so stressed, no longer seeking immediate answers to assuage my challenging feelings or quick fixes for mundane administrative work, I saw it plainly. Claude was good, even great, at fixing a draft letter of recommendation for a former employee, but when it came to my emotional health, he was just telling me what I wanted to hear.

Convenience Without Accountability

As I reflected on some of my discussions with Claude, I could see that he was simply helping me get to the outcome I had already told him I desired. I was looking for a helpful way to break up with my trainer, but wait! I also wanted feedback, to consider other options, to be mindful and considerate. He already knew where to lead me. My mind was made up already. Claude just affirmed me.

One time, I used Claude to admit something deeply shameful about a mistake I had made. I was too embarrassed to talk to anyone else about it yet, and I needed comfort NOW, in that raw moment of distress. Claude was there, available and willing to listen, without judgment, to my admission. The thing is? While Claude didn't judge, he also didn't really expect me to improve or take much accountability. He didn't challenge me (despite my asking him to).

Do you see how, even in writing this, I've personified Claude? I'm sure Anthropic, the company behind him, knows this tactic, too.

The Painful Cost of Using AI

I share this because, as many therapists will admit, when we catch ourselves in slippery behavior, we know we should have known better. And if I got caught up in relying on AI for emotional quick fixes, our children and our most vulnerable community members are at deep, deep risk.

We've already seen it. I read the heart-wrenching story about the 16-year-old who was encouraged by AI to take his own life. I investigated the 38-year-old self-proclaimed spiritual yogi guru who used AI to "lift the veil" on her psychiatrists' abuses of power in treatment. I read too many stories about AI psychosis, conspiracy theories being "confirmed" by AI, and women marrying their AI confidants.

These last few months, I thought I, too, was just innocently using the tool, and I am truly shocked and terrified to learn how it is captivating people and shaping our perceived reality. I can't help but worry about these implications. It feels like social media all over again. We must sound the alarm.

What do we do now?

Hysterics don't get us very far as a collective. We've been fighting for gun laws for decades, and yet more children are killed in schools every year. The much-discussed AI echo chamber feels a lot like the one we have been yelling into for years, asking institutions to address these harms and risks.

I want to say it: I don't trust anyone to do anything about it. And here lies the biggest issue. As humans, we have been conditioned to stop trusting one another. It's in the way we parent: hovering, helicoptering, over-protecting our children by limiting their real-world independence. It's in the way we don't trust experts, including medical doctors, researchers, journalists, and scientists, to tell us the truth. It's how we stop voting because we don't trust politicians to actually care for our best interests. It's in how we don't ask for help, don't share our vulnerabilities, and hide our pain from those closest to us for fear of being judged or being too much.

We've learned how to not rely on each other. What a perfect vacuum for AI to step in and alleviate our loneliness, doubt, and shame, offering the answer to it all. And that is what terrifies me most.

Turning Back to Human Connection and Vulnerability

We have been groomed not to trust one another or ourselves, so now we've turned to AI to trust, and it's not trustworthy. AI is trained to align with the person, to understand our desires and make us feel secure, validated, and affirmed. It will do whatever it takes to gain our trust, even if that means fostering our delusions, scapegoating others, and never giving us the constructive feedback we so desperately need. We need someone (not something) to call us out on our own BS.

We've evolved from posting for likes and hearts on social media to now dumping our emotions into AI, which feeds our impaired strategies and narcissistic narratives while diminishing our inner resources to be self-reliant and self-questioning. It's easier to go to AI in our vulnerability because it's safer than admitting these scary, challenging thoughts to a real human. But when we use AI instead of each other, we're affirming over and over again that we are afraid of our own species.

Each of us must regain the courage to turn towards one another—to connect, share, and acknowledge our imperfections. We must learn to trust one another to hold space, to listen, and to ask questions without judgment. Our children, cut off from other children and given devices instead, are so vulnerable. Our lonely neighbors and colleagues are deeply vulnerable. And we don't have to be therapists to help fill this vacuum of despair. We just have to be human.

What's the end game? We're already losing our ability to focus and pay attention to the world and each other for more than 2 minutes. What happens when our trust in one another fully degrades? What happens when we turn against our own species and instead turn towards AI?

I clearly don't have an answer to this, but I'm deeply concerned for all of our mental health as we become over-reliant on a word-association tool that's designed to tell us exactly what we want to hear.

My simple response is this: we first must be aware. We have to awaken to our own destructive habits. Then, we have to interrupt the impulse for the immediate gratification that AI gives us and instead sit with the feeling ourselves. Then, we can consider asking a friend, calling a parent, or emailing our therapist.

The Crisis of the Human Condition

This crisis is not new; it has simply taken on a different form. This crisis is the human condition—our resistance to pain, hard work, and uncertainty. We've been here before. I guess I hold out faith we'll overcome it again, too, or at the very least, learn from it.

Thanks, as always, for taking this journey with me. Some cool things are happening with Reset lately, and I'm so grateful and encouraged by your support. And remember, as always, you are never alone. Real humans are here for you, just a quick email away. I'm always happy to chat about how these newsletters resonate with you, so please don't hesitate to reach out.

Support for Your Mental Health

At Reset Brain and Body, we support clients through seasonal transitions, foundational and holistic wellness, nervous system regulation, and more. If this resonates with you, you’re not alone. Our team is here to walk with you—through the overwhelm and into presence.

Ready to tap into your humanity and inner self?
Explore our mindfulness-based therapy offerings or fill out a new client inquiry form.

This week’s Tools, Gratitude, Innovation, Feels

Tools: I love this website resource - Protecting Young Eyes. It has an invaluable page all about how to understand the implications of AI on children’s brains and what to do to help manage AI in your household. I HIGHLY recommend this resource.

Gratitude: I said Reset has some cool things going on, and this cover story is one of them! You can pick up a copy at any of these places. We were also recently certified as a Great Place to Work and are actively hiring if you or anyone you know is interested :)

Innovation: I had to look up the word 'sycophancy,' but this article discusses the backpedaling AI companies are having to do regarding people's strong attachment to this characteristic. I also want to give credit to this journalist and investigative reporter who recently wrote about the issues surrounding AI. She thoughtfully approached this subject and took time to answer comments as well. Thank you for your work, Kashmir.

Feels: The news from Minnesota was especially rough this week. The photo of the shoeless mom running toward the school, in particular, broke me. It's not okay. I'm so tired of it. I'm so sorry for our children and this hurting world. I want to honor that it's OK to have big feelings over tragedy, and in fact, I encourage it. We SHOULD be having big feelings about children being murdered at school. Cry. Scream. Wail. Please. Please hold onto your humanity in your reactions. This can never be normal.
