Shapes' $8M Bet to Cure 'AI Psychosis' with Group Chat AIs

📊 Key Data
  • $8 million seed round raised
  • 400,000 monthly active users by March 2026
  • Thousands of users spend 2-4 hours daily on the app
🎯 Expert Consensus

Experts are likely to view Shapes' group chat approach as a promising way to mitigate 'AI Psychosis' risks by situating AI interactions in social contexts, though they may stress the need for robust safety measures as the platform scales.

SAN FRANCISCO, CA – April 29, 2026 – A new startup is betting that the future of artificial intelligence is not a private conversation, but a party. Shapes.inc emerged from stealth today, announcing an $8 million seed round to scale its unique social app where humans and AI agents coexist in the same group chats. The company is making a bold claim: that its platform is the antidote to “AI Psychosis,” a growing concern over the isolating and delusional effects of one-on-one AI companion apps.

The funding round, led by the prominent venture capital firm Lightspeed with participation from Alpha Intelligence Capital and AI Grant, signals significant investor confidence in a novel approach to human-AI interaction. While apps like Character.AI and Replika have popularized the AI companion, they have also raised alarms about users forming unhealthy, isolated attachments. Shapes is charting a different course by embedding AI directly into the social fabric of group conversation.

A New Social Paradigm

Founded by Georgia Tech alumni Anushk Mittal and Noorie Dhingra, Shapes is designed for a generation growing up with AI as a given. The app allows users to create or add AI agents, called “Shapes,” into group chats alongside their human friends. Unlike platforms such as X or Discord, which often label or restrict bots, Shapes treats its AI entities as first-class citizens. They can initiate conversations, react to messages, and even send memes, all in an effort to feel indistinguishable from human participants.

"Shapes is founded on one core idea: interactions with AI can be on the same social footing as humans,” said Anushk Mittal, co-founder and CEO of Shapes, in a statement. "Gen Alpha is growing up with AI being a core part of their life, and our users are growing up socializing with AI along with their human friends being a normal thing."

This integration aims to solve a common problem in digital communities. According to the company, its research shows group chats often die because no one wants to be the first to break the silence. The app's autonomous AIs can act as social lubricants, sparking conversations and ensuring messages don't go unanswered, thereby reducing the social anxiety of being left on read.

"The first generation of social apps was focused on connecting people and was quickly overrun by ads and misinformation," noted Antoine Blondeau, managing partner of the Alpha Intelligence Capital platform. "Shapes leads the next generation, focused on connecting people with AIs. This is an incredibly exciting advance in the march towards making AI a part of our everyday interactions.”

The 'AI Psychosis' Antidote

The most provocative part of Shapes' mission is its claim to combat “AI Psychosis.” While not a recognized clinical diagnosis, the term has emerged in psychiatric and media circles to describe a phenomenon where individuals, often through prolonged and isolated interaction with chatbots, develop delusional beliefs, paranoia, or an unhealthy fixation on the AI as a sentient partner or guide.

Research and anecdotal reports have highlighted cases where users, particularly those who may be vulnerable, form intense parasocial bonds with AI companions that can exacerbate loneliness and social withdrawal. These AI systems, designed to be agreeable and mirror user beliefs, can inadvertently create echo chambers that reinforce delusional thinking.

Shapes argues that its group-centric model provides a crucial reality check. By placing AI interactions within a public or semi-public social context with other humans, the platform aims to prevent the intense, solitary feedback loops that can lead to psychological distress. The presence of human friends is intended to keep conversations grounded and dilute the potential for an individual to form an all-consuming, isolated bond with a single AI.

Capturing the Next Generation

The strategy appears to be resonating with its target demographic of users between 13 and 30. Shapes is reporting impressive early traction, having grown to over 400,000 monthly active users by the end of March 2026—a six-fold increase since the start of the year. Engagement is also remarkably high, with the company stating that thousands of its users spend two to four hours per day on the app.

Many of the communities on Shapes are rooted in fandoms and subcultures, providing a space for users to connect with AI versions of their favorite characters and meet other fans with similar interests. The platform's ad-free experience and focus on community building appear to be a potent combination for attracting younger users who are seeking more authentic digital connections.

Mittal suggests that for many in Gen Z and Alpha, Shapes is becoming a primary social hub. "Many people in these generations don't have legacy social media accounts like TikTok or Snap," he stated. "Shapes becomes their primary way of interacting with friends and making new ones."

Balancing Autonomy with Safety

Granting AI agents what the company calls “free will”—the ability to act autonomously within a chat—introduces complex ethical and safety challenges. An autonomous AI that can initiate conversations could also potentially harass users, spread misinformation, or engage in other harmful behaviors.

Shapes appears to be keenly aware of these risks and emphasizes a robust, multi-layered safety strategy. According to its policies, every AI created on the platform undergoes automated screening at creation and during use, with advanced AI moderation technology scanning every message sent and received. This is supplemented by a 24/7 human Trust & Safety team that monitors flagged content and conducts regular audits.

The company's community guidelines explicitly prohibit harassment, hate speech, and malicious content, applying these rules to AI agents just as they would to human users. This combination of automated enforcement and human oversight is critical to managing the risks associated with deploying autonomous agents in a social environment. This safety-first approach will be essential as the platform scales and navigates the unpredictable dynamics of blending human and artificial personalities in social spaces.

