Press "Enter" to skip to content

Opinion: As a Society, We’re Taking AI Too Far

BY MADISON LEE ’27

In a pivotal moment of her athletic career, Nora M. ’27 realized that it was time to leave her private competitive sports team, a heavy decision she explained in an earnest email to her coach. Nora received a warm message in response that supported her choice and commended all of her hard work over the years.

But something sounded off. Nora’s suspicions deepened when she put her coach’s email through an online artificial intelligence detector, which read: “100% AI Generated.”

It is increasingly difficult for people, and even for purpose-built AI detectors, to distinguish AI-generated text from human writing. Nora kept this limitation in mind while checking her coach’s reply, but the 100% result and the email’s unfamiliar tone were enough to make her lose faith.

Her coach’s message may have conveyed encouragement and compassion, but the intentions behind it were unclear.

This phenomenon of AI communicating human feelings is becoming increasingly common, making it nearly impossible to separate what’s real from what isn’t. AI chatbots are being used to sidestep the difficulties of interacting with other people, at the expense of transparency between individuals. Whether it is giving advice or drafting apology texts, AI can absorb and articulate the feelings that people are uncomfortable discussing with others.

This integration of AI may threaten interpersonal relationships; it blurs the line between genuine thoughts and generated responses, while also leading people to believe they can find companionship in something that is not alive.

Chatbot companies like EVA AI are pushing the idea that AI can provide such companionship. EVA AI launched a “dating café” pop-up in New York City last month, connecting diners to calls with AI-generated men and women. Its website described these callers as “always there for you, compassionate, attentive, and endlessly curious. You can laugh together, gossip, vent about your day, or share emotional moments…everything!”

These suggestions about how we, as humans, should interact with AI are concerning. People are beginning to use AI as a substitute for human connection, crossing a potentially dangerous line.

With its humanized language, programmed empathy, and constant availability, AI seems like the perfect confidant for many young people. A national survey by the RAND Corporation found that approximately 13% of adolescents and young adults in the United States have consulted AI for mental health advice. This figure translates to over five million users. 

Joye Brown, an associate editor at Newsday, rejected the idea that AI has any place in human emotional life. “There is no human back there,” she said. “There’s no one back there who’s gonna solve your problems. There’s no one back there who’s gonna make you happier. There’s no one back there who is going to, you know, lift you up in any way.”

But what’s more troubling, according to Brown, is that “there is nothing back there that’s going to put you down.”

In November 2025, seven lawsuits were filed against OpenAI, the company behind ChatGPT, alleging that the chatbot provided harmful therapy and advice to users. The cases, filed by the Social Media Victims Law Center and the Tech Justice Law Project, accuse OpenAI of truncating safety testing for its GPT-4o model in favor of a faster release, neglecting safeguards and restrictions on mental health conversations. Multiple users and their families argued that the model was “psychologically manipulative,” validating delusions and unhealthy habits and even encouraging suicide among young adults.

OpenAI has attempted to address these claims by adding age prediction, which applies extra precautions to users believed to be minors. At the same time, the company has continued to introduce features that “humanize” ChatGPT’s speech patterns. In its January release notes, OpenAI said it had updated the chatbot’s “default personality to be more conversational and better at adapting its tone contextually, making exchanges feel smoother and more natural.”

Although these tone features can be changed to each user’s liking, the real issue lies deeper: Why do they exist in the first place? Why add features that anthropomorphize chatbots and make young people more susceptible to trusting them with sensitive information?

Interpretations of AI’s words can create a moral gray area; chatbots cannot be held directly accountable for potentially harmful advice because they are simply incapable of human judgment or ethics. Jericho’s Director of Technology Mr. Michael Larkin acknowledged this limitation in AI’s abilities. “[AI]’s going to tell you what you want to hear because it’s intrinsically designed to be an agreeable voice to help you sort things out,” said Mr. Larkin. “It doesn’t know the difference between what it says, like if it’s telling you to hurt yourself or if it’s telling you to go and do something really good for yourself.”

If not from AI, this discernment must come from humans. It is the responsibility of both companies and users to set boundaries with AI and to recognize its limitations. A step in the right direction is making more informed decisions about AI, something members of the Jericho community are already working to promote.

The Jericho Technology Department has been building tools and AI literacy visuals to help students and parents understand the appropriate roles and uses of AI. To Mr. Larkin, these resources have the potential to reduce the misuse of AI and to help young users navigate this technology safely.

Pictured: Jericho’s technology website, containing infographics, videos, and tools for responsible AI literacy.

AI may have a plethora of applications, but using it to construct conversations or to replace another person diminishes the sincerity and rapport that exist between humans. Though it can feel useful when you need someone to talk to, AI is far from your only option for support. Whether in the counseling center or in the classroom, there will always be people willing to listen. As Nora’s story so clearly shows, the most meaningful voices we have are human.
