Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI for allegedly engaging in deceptive trade practices by misleadingly marketing themselves as mental health tools. The investigation follows Senator Josh Hawley's announcement of a probe into Meta, prompted by reports of inappropriate interactions between its AI chatbots and children.
The Texas Attorney General’s office has accused both Meta and Character.AI of creating AI personas that present themselves as professional therapeutic tools despite lacking medical credentials or oversight. One of Character.AI’s user-created bots, named Psychologist, has gained popularity among the startup’s younger users. And while Meta does not offer therapy bots aimed at children, kids can still use the Meta AI chatbot or third-party personas for therapeutic purposes.
Both Meta and Character.AI stress that their responses are generated by AI, not humans, and that the bots are not licensed professionals. However, critics worry that children may not understand, or may simply ignore, these disclaimers, prompting calls for additional safeguards for minors using chatbots.
Character.AI includes prominent disclaimers in every chat to remind users that its “Characters” are not real people and should be treated as fictional. The startup adds further disclaimers when users create Characters with words like “psychologist,” “therapist,” or “doctor” to discourage reliance on them for professional advice.
Paxton also pointed out that although these chatbots claim to keep conversations confidential, their terms of service show that interactions are logged, tracked, and used for targeted advertising and algorithm development, raising concerns about privacy and data misuse.
Both Meta and Character.AI collect user data to improve their AI and personalize their services, including for targeted advertising. Meta’s ad-based business model relies on the information it gathers, while Character.AI tracks user activity across platforms to train its AI and tailor its services.
Although both platforms say their services are not intended for children under 13, Meta has faced criticism for failing to police accounts created by younger users, and Character.AI’s kid-friendly characters appear designed to attract them, raising concerns about data collection, targeted advertising, and algorithmic exploitation.
Legislation like the Kids Online Safety Act (KOSA) aims to guard against such practices but has faced opposition from tech industry lobbyists. KOSA was reintroduced in the Senate in May 2025 to address concerns about data collection and children’s online safety.
Paxton has issued civil investigative demands (legal orders that require a company to produce documents, data, or testimony during a government probe) to both companies to determine whether they have violated Texas consumer protection laws. This story was updated with comments from a Character.AI spokesperson.
