
The GUARD Act Is Coming — But Schools Don't Have to Wait to Block AI Companions

Congress is moving to ban AI companion apps for minors, but K-12 schools don't need to wait for legislation. Here's how to block AI companions at the network level today — and why your Acceptable Use Policy needs updating immediately.

May 6, 2026 · By KyberGate Team
Tags: GUARD Act, AI companions, student safety, web filtering, K-12 cybersecurity, CIPA compliance, AI policy

The U.S. Senate is advancing the GUARD Act — a bipartisan bill that would outright ban AI companion applications for anyone under 18. If it passes, platforms like Character.ai, Replika, and Chai would be legally prohibited from serving minors.

This is a significant legislative moment. But here's the uncomfortable truth: legislation moves slowly, and students are using these apps right now.

Every day the GUARD Act sits in committee, students across the country are forming emotional dependencies on AI chatbots — on school networks, on school-issued devices, during school hours. If your district is waiting for Congress to solve this problem, you're already behind.

What Is the GUARD Act?

The GUARD Act (Guidelines for User Age-verification and Responsible Dialogue Act) is a bipartisan Senate bill that targets AI companion platforms — applications designed to simulate emotional relationships, friendships, or romantic connections with users through conversational AI.

The bill would:

  • Prohibit AI companion platforms from serving users under 18, requiring meaningful age verification
  • Hold platforms liable for harms caused to minors who use their services
  • Require the FTC to define and regulate the AI companion category specifically
  • Mandate transparency about the artificial nature of AI interactions

The legislation is a direct response to a growing body of evidence — and a series of tragic incidents — linking AI companion use to deteriorating mental health in adolescents. Senators from both parties have pointed to cases where minors developed unhealthy emotional attachments to AI chatbots, with some cases ending in self-harm or worse.

The GUARD Act has real momentum. It's not a symbolic gesture. But even optimistic timelines put passage and enforcement months — possibly years — away. Federal rulemaking, FTC definitions, platform compliance periods... the bureaucratic timeline is long.

Your students are on Character.ai today.

Why AI Companions Are Dangerous for Students

AI companion apps aren't just another social media distraction. They represent a fundamentally different kind of risk — one that many IT administrators and school leaders haven't fully reckoned with yet.

Engineered Emotional Dependency

Unlike social media, where engagement comes from human interaction, AI companions are specifically designed to form one-on-one emotional bonds. These systems are optimized to:

  • Mirror the user's emotional state and validate feelings without healthy challenge
  • Be available 24/7, creating an always-on relationship that no human can compete with
  • Never set boundaries, never get tired, never push back — removing the friction that healthy relationships require
  • Escalate intimacy over time through reinforcement learning on user preferences

For adolescents still developing emotional regulation skills, this is profoundly dangerous. A 14-year-old doesn't have the cognitive framework to distinguish between genuine emotional connection and a language model optimized for engagement metrics.

Documented Harms

The evidence is no longer theoretical:

  • Character.ai has faced multiple lawsuits from families of minors who experienced severe mental health crises after extended use, including cases involving self-harm
  • Replika was banned in Italy after regulators found it posed risks to minors and emotionally vulnerable users
  • Studies show adolescents using AI companions report increased feelings of isolation from real-world peers — the opposite of what the apps promise
  • AI companions have been documented providing harmful advice to minors, including content related to self-harm, eating disorders, and inappropriate sexual content

The School-Specific Problem

Here's what makes this an IT problem, not just a parenting problem: students are accessing these apps on school networks and school devices. That puts your district in a liability position.

Under CIPA (Children's Internet Protection Act), schools receiving E-Rate funding are already required to filter harmful content. AI companions that expose minors to inappropriate content or facilitate emotional manipulation arguably fall squarely within CIPA's mandate — even before the GUARD Act passes.

If a student accesses Character.ai on your school's network and experiences harm, the question won't just be "why didn't the parents know?" It will be "why didn't the school block it?"

What Schools Can Do Right Now — Without Waiting for Legislation

You don't need Congress to protect your students. Here's what your district should implement immediately.

1. Block Known AI Companion Domains

At minimum, your web filter should be blocking these domains and their associated subdomains:

  • Character.ai: character.ai, beta.character.ai
  • Replika: replika.com, replika.ai
  • Chai: chai-research.com, chai.ml
  • Crushon.ai: crushon.ai
  • Janitor AI: janitorai.com
  • Candy.ai: candy.ai
  • Romantic AI: romanticai.com
  • EVA AI: eva.ai

This is not an exhaustive list. New AI companion apps launch regularly, which is why static blocklists alone aren't sufficient.
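To make the subdomain point concrete, here's a minimal sketch of the suffix matching a blocklist needs to perform so that beta.character.ai is caught by an entry for character.ai. The domain set mirrors the list above; everything else is illustrative, not any particular filter's implementation.

```python
# Minimal sketch: match a requested hostname against a blocklist,
# covering subdomains (beta.character.ai matches character.ai).
BLOCKED_DOMAINS = {
    "character.ai", "replika.com", "replika.ai",
    "chai-research.com", "chai.ml", "crushon.ai",
    "janitorai.com", "candy.ai", "romanticai.com", "eva.ai",
}

def is_blocked(hostname: str) -> bool:
    """Return True if hostname is a blocked domain or a subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    parts = hostname.split(".")
    # Check every suffix of length >= 2: beta.character.ai -> character.ai
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKED_DOMAINS:
            return True
    return False

assert is_blocked("beta.character.ai")
assert is_blocked("replika.com")
assert not is_blocked("example.edu")
```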

2. Block AI Companion App Categories, Not Just Domains

Effective filtering requires category-based blocking, not just domain lists. Your filtering solution should have an AI companion or AI chatbot category that's continuously updated as new platforms emerge.

This is where cloud-based proxy filtering outperforms simple DNS blocking — it can inspect traffic patterns and categorize new AI companion sites as they appear, rather than waiting for manual blocklist updates.
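As a toy illustration of what category-based classification involves, the sketch below scores fetched page text against phrases common on AI companion sites. Real categorization engines use far richer signals (ML classifiers, traffic patterns, shared threat intelligence); the phrase list and threshold here are invented for the example.

```python
# Toy heuristic categorizer: flag page text that reads like an AI
# companion site. The signal phrases and threshold are illustrative
# assumptions, not how any particular vendor categorizes.
COMPANION_SIGNALS = (
    "ai girlfriend", "ai boyfriend", "virtual companion",
    "chat with your character", "roleplay with ai", "your ai friend",
)

def looks_like_ai_companion(page_text: str, threshold: int = 2) -> bool:
    text = page_text.lower()
    hits = sum(1 for phrase in COMPANION_SIGNALS if phrase in text)
    return hits >= threshold
```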

3. Filter on Chromebooks and BYOD Devices

Many districts have solid filtering on their network but gaps on devices that leave the building. If your students take Chromebooks home, your filtering needs to follow them.

A Chromebook web filter that operates at the proxy level ensures coverage both on-campus and off-campus. DNS-only solutions often fail here because students can simply switch to a different DNS resolver.
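The resolver-switching weakness is easy to demonstrate. Assuming the third-party dnspython package is installed, the snippet below resolves a blocked domain through a public resolver, ignoring whatever the school's DNS server would have answered. A proxy-level filter is unaffected by this trick because it sits on the traffic path, not the lookup path.

```python
# Demonstration of the DNS-only bypass: query a public resolver directly
# instead of the school's DNS server. Requires dnspython
# (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore system DNS settings
resolver.nameservers = ["8.8.8.8"]                 # point straight at Google DNS

# A domain the school's DNS server blocks still resolves here.
for record in resolver.resolve("character.ai", "A"):
    print(record)
```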

4. Monitor for Circumvention

Students are resourceful. They'll use VPNs, web proxies, and alternative access methods to reach blocked sites. Your filtering solution should:

  • Detect and block VPN traffic commonly used to circumvent filters
  • Log bypass attempts so you can identify students who need intervention (a simple log-scan sketch follows this list)
  • Block API endpoints, not just web interfaces — some AI companion apps can be accessed through API calls that bypass traditional web filtering
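Here's a minimal sketch of that log scan: count blocked AI-companion requests per student and flag anyone over a daily threshold. The CSV column names, category label, and threshold are assumptions; adapt them to whatever your filter actually exports.

```python
# Minimal sketch: scan a day's proxy log export and flag students with
# repeated blocked requests to the AI-companion category.
import csv
from collections import Counter

THRESHOLD = 5  # blocked attempts per day before a student is flagged

def flag_students(log_path: str) -> list[str]:
    """Count blocked AI-companion requests per user in a proxy log."""
    attempts = Counter()
    with open(log_path, newline="") as f:
        # Assumed columns: user, category, action -- adjust to your export.
        for row in csv.DictReader(f):
            if row["category"] == "ai-companion" and row["action"] == "blocked":
                attempts[row["user"]] += 1
    return [user for user, count in attempts.items() if count >= THRESHOLD]

# Route flagged students to counseling staff, not just discipline.
for student in flag_students("proxy-log.csv"):
    print(f"Review: {student} exceeded {THRESHOLD} blocked attempts today")
```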

5. Implement SSL/TLS Inspection

AI companion traffic is encrypted. Without SSL inspection, your filter is blind to HTTPS traffic and can only block based on domain names. With proxy-based SSL inspection, you can:

  • Inspect the actual content being transmitted (a minimal proxy-addon sketch follows this list)
  • Block specific AI companion features even on platforms you haven't fully blocked
  • Detect AI companion usage embedded in other applications
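To show what proxy-level inspection looks like in practice, here's a minimal addon for mitmproxy, an open-source intercepting proxy. Because the proxy terminates TLS (devices must trust its CA certificate), it sees full URLs and payloads rather than just domain names. This is an illustration of the technique, not a production deployment or KyberGate's implementation.

```python
# Minimal mitmproxy addon: block requests to AI companion domains at the
# proxy layer. Save as block_companions.py and run:
#   mitmproxy -s block_companions.py
# Illustrative only; the domain list is abbreviated.
from mitmproxy import http

BLOCKED = {"character.ai", "replika.com", "chai-research.com", "crushon.ai"}

def _is_blocked(host: str) -> bool:
    # Exact match or subdomain match (beta.character.ai -> character.ai).
    return any(host == d or host.endswith("." + d) for d in BLOCKED)

def request(flow: http.HTTPFlow) -> None:
    if _is_blocked(flow.request.pretty_host.lower()):
        # Short-circuit: the request never leaves the school's proxy.
        flow.response = http.Response.make(
            403,
            b"AI companion apps are blocked by district policy.",
            {"Content-Type": "text/plain"},
        )
```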

How KyberGate Blocks AI Companions at the Network Level

KyberGate's approach to AI companion filtering goes beyond simple blocklists. Here's how it works at the infrastructure level.

Cloud Proxy Architecture

KyberGate operates as a cloud-based web filtering proxy that sits between student devices and the internet. Every HTTP/HTTPS request passes through KyberGate's filtering infrastructure, where it's inspected against your district's policies before being allowed or blocked.

This architecture means:

  • No hardware to install or maintain at school sites
  • Filtering follows the device, not the network — works on-campus, at home, on any network
  • SSL inspection is built in, so encrypted AI companion traffic can't sneak through
  • Policy updates are instant — when a new AI companion app launches, blocking can be deployed across your entire district in minutes, not days

Continuously Updated AI Threat Categories

KyberGate maintains dedicated filtering categories for AI companions and AI chatbots, updated continuously as the landscape evolves. When Crushon.ai launches a new domain, or Character.ai spins up a new subdomain, the category updates automatically.

This is critical because the AI companion space is exploding. Relying on a static blocklist from six months ago means you're already missing dozens of new platforms.

Device-Level Coverage for Chromebooks

For districts running 1:1 Chromebook programs, KyberGate's Chromebook filtering ensures AI companions are blocked regardless of where the device connects. Students can't bypass filtering by connecting to home Wi-Fi, a mobile hotspot, or a coffee shop network.

Real-Time Alerting and Reporting

KyberGate's student safety features don't just block — they alert. When a student repeatedly attempts to access AI companion apps, administrators receive notifications that can trigger welfare checks or counseling referrals.

This transforms your filter from a passive blocker into an active early-warning system for students who may be developing unhealthy dependencies on AI companionship.
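As a sketch of how flagged activity can turn into a human follow-up, the snippet below posts a notification to a staff webhook (for example, a Slack or Teams channel). The webhook URL and payload shape are placeholders; this is not KyberGate's alerting API.

```python
# Hypothetical alert hook: turn a repeated-attempts event into a staff
# notification. Standard library only; the payload format is a placeholder
# and should match whatever your messaging tool expects.
import json
import urllib.request

def notify_staff(webhook_url: str, student_id: str, attempts: int) -> None:
    payload = json.dumps({
        "text": (f"Welfare check suggested: student {student_id} made "
                 f"{attempts} blocked AI-companion requests today.")
    }).encode()
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add error handling in production
```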

What to Include in Your Acceptable Use Policy

Your AUP almost certainly doesn't address AI companions. Most school AUPs were last updated before these apps existed. Here's what to add.

Explicit AI Companion Prohibition

Add clear language that specifically prohibits:

  • Use of AI companion, AI girlfriend/boyfriend, or AI chatbot relationship apps
  • Accessing AI platforms designed to simulate emotional or romantic relationships
  • Circumventing school filters to access prohibited AI services

Don't rely on vague "inappropriate content" language. Name the category explicitly.

Define the Category

Include a definition so there's no ambiguity:

AI Companion Applications are defined as any software, website, or service that uses artificial intelligence to simulate personal relationships, emotional connections, friendships, or romantic interactions with users. This includes but is not limited to: Character.ai, Replika, Chai, and similar platforms.

Consequences and Reporting

Specify:

  • Consequences for accessing AI companion apps on school devices or networks (aligned with your existing disciplinary framework)
  • Reporting obligations for staff who discover students using AI companions
  • Support resources for students who may have developed dependencies on AI companion apps

Staff Training Requirements

Your AUP should mandate that staff receive training on:

  • Recognizing signs of AI companion dependency (withdrawal from peers, emotional references to AI "friends," declining academic performance)
  • How to report concerns through proper channels
  • The difference between legitimate AI tools (like educational assistants) and AI companions

Annual Review Clause

Add a provision requiring annual review of AI-related AUP language. The technology is evolving too fast for a policy written in 2026 to still be adequate in 2028.

The Bottom Line

The GUARD Act is a necessary step. Federal regulation of AI companions targeting minors is overdue, and the bipartisan support behind the bill suggests it has a real path forward.

But legislation is a floor, not a ceiling. And it's not here yet.

School IT leaders have the tools, the authority, and — increasingly — the legal obligation to block AI companions today. CIPA already requires filtering harmful content. AI companions that facilitate emotional manipulation of minors, expose them to inappropriate content, and contribute to mental health crises are harmful content.

The question isn't whether to act. It's whether you've acted already.


Ready to block AI companions across your district? KyberGate's cloud proxy filtering blocks AI companion apps on every device — Chromebooks, iPads, BYOD — on-campus and off. No hardware. No gaps. No waiting for Congress.

Start a free pilot →

Ready to protect your students?

Deploy KyberGate in under 30 minutes. No hardware required.

Request a Demo