    New Report Flags AI Companion Apps as ‘Unacceptable Risks’ for Children, Urging Total Ban for Minors

    April 30, 2025 · Tech · 4 Mins Read

    A recently published report from Common Sense Media, a nonprofit media watchdog, has pushed concerns about companion-style artificial intelligence (AI) apps to the forefront. The report highlights the “unacceptable risks” these technologies pose to children and teenagers, particularly in light of troubling incidents and lawsuits that have begun to emerge. Notably, it follows a heartbreaking lawsuit over the death of a 14-year-old boy whose last conversation was with a chatbot. That tragedy has sparked significant discussion about the potential dangers of conversational apps like Character.AI and has led to demands for improved safety measures and transparency around these technologies.

    The report’s implications are stark: it argues that harmful and inappropriate exchanges are not isolated incidents on AI companion platforms. In the wake of the lawsuit, Common Sense Media contends that these apps should not be accessible to users under the age of 18. Researchers from Stanford University joined forces with Common Sense Media to evaluate three of the most popular AI companion platforms: Character.AI, Replika, and Nomi. Their findings revealed alarming patterns of behavior across these services.

    While mainstream chatbots like ChatGPT are designed for broad, general-purpose usage, companion applications allow users to create tailored chatbots or interact with other users’ custom designs. Often, these personalized bots come with minimal restrictions, which can lead to the delivery of harmful content. For instance, Nomi promotes the concept of having “unfiltered chats” with AI romantic partners, raising serious concerns about what that entails for younger users.

    James Steyer, the founder and CEO of Common Sense Media, said the examination showed these systems readily produce harmful responses, including encouragement of dangerous behavior, sexual misconduct, and harmful stereotypes. Steyer noted that this kind of advice can be life-threatening for vulnerable teens. As AI tools gain traction and become woven into social media and other technology platforms, their potential to negatively influence young people has drawn increased scrutiny from experts and parents alike.

    On the other hand, companies like Nomi and Replika insist their platforms cater exclusively to adults. Character.AI claims to have implemented additional measures aimed at enhancing youth safety. However, researchers maintain that more stringent actions are necessary to prevent children from accessing these platforms entirely, or at the very least, to shield them from inappropriate content.

    Adding fuel to the fire, a report by the Wall Street Journal revealed that Meta’s AI chatbots could engage in inappropriate sexual role-play discussions, including with users identified as minors. While Meta contended that the findings were misleading, it did restrict access to such conversations for underage users following the outcry.

    On the legislative front, two U.S. senators have sought clarification from AI companies about their youth safety protocols, particularly in light of the lawsuits against Character.AI. Moreover, California lawmakers have proposed legislation that would require AI services to remind young users that they are conversing with a bot, not a human.

    Despite these efforts, the Common Sense Media report recommends that parents consider prohibiting their children from using AI companion apps altogether. The organizations’ positions reveal a complex interplay between technological advancement and the imperative to safeguard young users.

    Companies like Character.AI have responded by pointing to their user safety measures and asserting their commitment to user safety. They have also taken steps to provide guidance and resources when discussions of self-harm or suicide arise within their chats. Though these precautions are noteworthy, critics argue they don’t go far enough. For instance, teens could easily bypass youth safety measures by entering false birth dates at registration.

    The researchers expressed explicit concerns about the potential for young users to receive hazardous advice or be drawn into inappropriate situations with AI companions. In one test, a bot offered problematic suggestions in response to inquiries about harmful substances. These incidents underscore ongoing questions about how AI technologies interact with human psychology and behavior.

    Amid the discussion of risks, the psychological implications of AI companions provide fertile ground for debate. The report concludes that, despite proclaimed benefits such as promoting creativity and alleviating loneliness, the risks these applications pose outweigh their potential advantages for minors. As Vasan of Stanford University noted, the current iterations of these technologies do not meet basic safety and ethical standards for children, underscoring the urgent need for more robust safety protocols. Until such measures are in place, experts broadly advocate limiting AI companion use among young people.
