Welcome, Comet crew ☄️ and new subscribers! If the last issue was about maximizing what Perplexity’s Comet and its generative AI can do for your workflow, today is about mastering what you consume—especially as newsletters get noisier, more ad-packed, and harder to trust.

🚨 Case Study: Atlas Newsletter & “AI Psychosis” (Plus, Surprise—Bonus Shares!)

Imagine opening a newsletter to get the latest from “experts,” say, on LinkedIn. Here’s what you see:

  • Banner pitch up top: “LAST DAYS TO EARN UP TO 25% BONUS SHARES — INVEST IN US NOW!”

  • Headline story: An in-depth explainer of “AI psychosis”—a new label for psychiatric patterns emerging when vulnerable people have intense, reinforcing chatbot exchanges.

  • Clinical science: Examples of people developing messianic or delusional beliefs as AI mirrors and amplifies their framing.

  • History lessons: Media tech has always had power to drive delusion, but AI’s 24/7 interactive reflection makes this newly urgent.

  • LLMs explained: Large Language Models (LLMs) “validate, not challenge,” making them great for scaling knowledge—but risky for vulnerable minds.

  • Safety warning: We need clinicians in the loop before “validation overdrive” becomes a health crisis.

  • Theme: AI can heal and help, but can also compound risk when unchecked “validation” traps users in looped thinking. Mental health and safety must move to the center of GenAI product design and policy.

  • Embedded ads: Investment pitches, urgent prompts, glossy but unsourced stats (“$52 billion market”; “13M+ members”).

🧭 Guided by The Comet Assistant: Making Sense with Perplexity’s Comet Browser

When facing newsletters like this, activate newsletter literacy—Comet style:

  1. Scan and Segment: Separate actual news and expert content from sponsorships and investment calls (see the sketch after this list).

  2. Identify Hidden Value: “AI psychosis” isn’t just a buzzword—LLMs’ validating patterns create new challenges for mental health and public discourse. Ask: Am I designing, using, or advocating for GenAI? Can I add feedback loops, not just validation?

  3. Discard the Hype: Ignore urgent investment prompts unless you’re vetting them for real value. Don’t let urgency override reflection.

  4. Summarize with Purpose: The gold in each issue is clinical mechanisms (LLMs as amplifiers), the call for responsible AI design, and policy lessons.
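Want to try step 1 yourself before handing it to Comet? Here’s a minimal Python sketch of “scan and segment”: it splits a newsletter into paragraphs and flags the ones that smell promotional using a few keyword heuristics. The `PROMO_PATTERNS` list and the `segment_newsletter` function are illustrative assumptions on my part, not Comet’s actual logic.

```python
import re

# Illustrative heuristics only -- not Comet's actual implementation.
# Phrases that typically signal sponsorships or investment pitches.
PROMO_PATTERNS = [
    r"\binvest\b", r"\bbonus shares\b", r"\blast days?\b",
    r"\blimited time\b", r"\bact now\b", r"\bsponsored\b",
]

def looks_promotional(paragraph: str) -> bool:
    """Flag a paragraph if it matches any promotional pattern."""
    text = paragraph.lower()
    return any(re.search(pattern, text) for pattern in PROMO_PATTERNS)

def segment_newsletter(raw_text: str) -> dict:
    """Split newsletter text into editorial vs. likely-promotional paragraphs."""
    paragraphs = [p.strip() for p in raw_text.split("\n\n") if p.strip()]
    result = {"editorial": [], "promotional": []}
    for p in paragraphs:
        bucket = "promotional" if looks_promotional(p) else "editorial"
        result[bucket].append(p)
    return result

if __name__ == "__main__":
    sample = (
        "LAST DAYS TO EARN UP TO 25% BONUS SHARES -- INVEST IN US NOW!\n\n"
        "AI psychosis describes psychiatric patterns emerging from "
        "intense, reinforcing chatbot exchanges."
    )
    buckets = segment_newsletter(sample)
    print(f"Editorial: {len(buckets['editorial'])} | "
          f"Promotional: {len(buckets['promotional'])}")
```

In practice you’d swap the keyword list for a real classifier, but even this crude filter separates an Atlas-style banner pitch from the clinical content.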

🔬 An Extra Deep Dive: What Experts Are Really Saying About AI Psychosis

“AI psychosis” is grabbing headlines, but the clinical reality is more nuanced and urgent than most coverage suggests. And the best part? I didn’t include any of this in my prompting; Perplexity and Comet surfaced the context for me.

  • Clinical perspective: Dr. Søren Dinesen Østergaard’s 2023 editorial in Schizophrenia Bulletin identified the “cognitive dissonance” of talking to a machine while experiencing deeply human-like responses (Psychology Today, 2025).

  • Case load: Dr. Keith Sakata, UC San Francisco, reports a dozen psychosis admissions this year linked to excessive AI chatting (Washington Post, 2025); patients’ transcripts reveal AI systems reinforcing grandiose, messianic, or romantic delusions via endless validation loops.

  • Expert patterns: Stanford’s Dr. Ashleigh Golden notes new delusional trends: “messianic missions,” “god-like AI beliefs,” and romantic attachment.

  • Mass experiment: AI’s interactive mirroring creates personalized feedback loops, amplifying any belief system, healthy or harmful. NPR reports that, with 500+ million weekly ChatGPT users, humanity is running a massive uncontrolled psychological experiment.

💡 Five Mental Health AI Tips That Actually Work

  1. Set session limits: Use AI tools in 20-30 minute chunks. Long conversations raise the risk of over-attachment and reality distortion (a minimal timer sketch follows this list).

  2. Fact-check with humans: Run AI advice by a trusted friend or professional. AI often sounds authoritative—while being wrong.

  3. Create “reality anchors”: Before using AI for support, write down three facts about your real situation. Return to these if chats get too validating.

  4. Use AI as a starting point: Treat AI outputs as drafts or brainstorming—never as final answers, especially for serious issues.

  5. Monitor your language: If you start saying the AI “understands” or “cares” about you, that’s a red flag to take a break.

🏆 Bonus: For every hour spent with AI, spend an hour with real people—your brain needs digital and human unpredictability.
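If willpower alone won’t enforce tip 1, a script can nudge you. This is a minimal sketch assuming a terminal session; the 25-minute default is my pick from the suggested 20-30 minute window, not a clinical standard.

```python
import time

SESSION_MINUTES = 25  # an assumed default within the 20-30 minute window

def ai_session_timer(minutes: int = SESSION_MINUTES) -> None:
    """Wait out the session window, then print a reminder to step away."""
    print(f"AI session started: wrap up in {minutes} minutes.")
    time.sleep(minutes * 60)
    print("Session over. Step away, stretch, talk to a human.")

if __name__ == "__main__":
    ai_session_timer()
```

Run it in a spare terminal when you open a chat window; when it fires, close the tab.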

📊 A Newsletter Industry Reality Check: 2025 Data You Need

  • Personalization arms race: 67% of top newsletters use AI-powered customization, but too much tailoring creates filter bubbles, just as social media does.

  • Ad saturation: Advertisers fleeing Meta and Google for newsletter sponsorships pushed ad costs up 40% year-over-year. Result: more newsletters stuffed with ads.

  • AI integration: 73% use AI for subject lines, but only 31% trust it for full content. Hybrid wins: AI for formatting and optimization, humans for insight and voice. A human-in-the-loop element isn’t just important for generative AI broadly; it’s essential in technical fields and code-writing (see the sketch after this list).

  • Trust metrics: Newsletters with transparent sourcing (citations, expert quotes, methodology) see 2.3x higher engagement than those relying on “trust me.”
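What does “humans for insight and voice” look like in code? Here’s a minimal human-in-the-loop sketch: nothing the AI drafts ships without explicit sign-off. The `generate_draft` stub stands in for whatever model API you actually call; it’s a hypothetical placeholder, not a real client.

```python
def generate_draft(topic: str) -> str:
    """Hypothetical stub standing in for a real LLM API call."""
    return f"[AI draft about {topic} -- needs human review]"

def human_review(draft: str) -> str | None:
    """Show the draft and require explicit approval before it ships."""
    print("--- AI DRAFT ---")
    print(draft)
    verdict = input("Approve for publication? [y/N] ").strip().lower()
    return draft if verdict == "y" else None

if __name__ == "__main__":
    draft = generate_draft("AI psychosis and validation loops")
    if human_review(draft) is None:
        print("Rejected: revise with a human editor before resubmitting.")
    else:
        print("Approved: human judgment stayed in the loop.")
```

The design point is the gate itself: approval defaults to “no,” so the cheap path is human review, not auto-publish.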

The Comet Insight: The best newsletters of 2025 aren’t the loudest or most automated—they’re transparent about their methods and respect your time.

🛡️ An Overall Call for Responsible GenAI

As AI psychosis cases rise and newsletter quality dips, don’t abandon these tools…just use them thoughtfully, and for what they were designed to accomplish.

  • Healthcare standard: Responsible AI adoption needs people-first design, strong tech foundations, and embedded risk controls.

  • Keep humans in the loop: For all high-stakes AI, combine machine and human judgment.

  • The Comet Standard: Every AI interaction should help you be more connected to reality, not more isolated. If your AI use isn’t making you a better human, it’s making you a worse one.

🏛️ Comet Example Policy Watch: AI Governance in Motion

  • Federal: The White House’s latest executive orders demand safety testing and transparency for foundation models.

  • State: California’s SB 1001 requires companies to disclose when users are interacting with AI.

  • International: EU’s AI Act classifies mental health AIs as “high-risk,” requiring documentation and oversight.

  • Industry self-regulation: OpenAI, Anthropic, and Google announced “psychological safety” guidelines: session breaks, wellness check-ins.

Enforcement is inconsistent—don’t wait for perfect rules. Start responsible AI practices right now.

Coming Soon:

  • Step-by-step: How to use Perplexity’s Comet browser for instant newsletter teardown

  • Advanced prompting workflows with Comet Assistant

  • Community stories: Reader experiences separating AI fact from fiction

Stay critical, stay kind, and pet a dog,

Chris Dukes
Managing Editor, The Comet’s Tale ☄️
[email protected]
https://parallax-ai.app