Meta has addressed a security flaw that allowed users of the Meta AI chatbot to view the private prompts and AI-generated responses of other users.
Sandeep Hodkasia, the founder of security testing firm AppSecure, revealed to TechCrunch that Meta rewarded him with a $10,000 bug bounty for privately reporting the bug he discovered on December 26, 2024.
According to Hodkasia, Meta implemented a fix on January 24, 2025, and did not find any signs of malicious exploitation of the bug.
Bug Discovery and Fix
Hodkasia explained that he stumbled upon the bug while exploring how Meta AI permits logged-in users to modify their AI prompts to regenerate text and images. By scrutinizing the network traffic in his browser during prompt editing, he realized he could alter a unique number assigned by Meta’s back-end servers to the prompt and its AI-generated response, thereby accessing someone else’s prompt and response.
The vulnerability stemmed from Meta's servers failing to check whether the user requesting a prompt and its response was authorized to view it. Hodkasia noted that the prompt numbers generated by Meta's servers were easily guessable, potentially enabling malicious actors to scrape users' prompts at scale by rapidly cycling through prompt numbers with automated tools.
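The pattern Hodkasia describes is a classic insecure direct object reference: the server trusts a client-supplied ID without confirming ownership. The sketch below is purely illustrative, with hypothetical data and function names (none of this reflects Meta's actual code); it contrasts a lookup that returns any record by ID with one that enforces an ownership check.

```python
# Hypothetical illustration of the flaw described above, not Meta's code.
# Prompt IDs are sequential integers, so a vulnerable endpoint can be
# enumerated; the fixed version refuses records the requester doesn't own.
PROMPTS = {
    101: {"owner": "alice", "text": "draft my resume"},
    102: {"owner": "bob", "text": "plan a surprise party"},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # Vulnerable: returns any prompt by ID without checking who is asking.
    record = PROMPTS.get(prompt_id)
    return record["text"] if record else None

def get_prompt_fixed(prompt_id, requesting_user):
    # Fixed: serve the prompt only if it belongs to the requesting user;
    # otherwise behave as if it does not exist.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        return None
    return record["text"]
```

With sequential IDs, an attacker could simply iterate `get_prompt_vulnerable(100, ...)`, `get_prompt_vulnerable(101, ...)`, and so on to harvest other users' data, which is why Hodkasia flagged the predictability of the numbers as part of the risk.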
Confirmation and Future Implications
Meta confirmed the fix in January, said it found no signs of abuse, and rewarded the researcher who reported the flaw. The bug's emergence coincides with the tech industry's rush to launch and refine AI products, despite the security and privacy risks that accompany them.
Meta AI’s standalone app encountered a rough start earlier this year when some users inadvertently shared what they believed were private conversations with the chatbot.
The discovery is a reminder of the ongoing challenge of securing AI products and protecting user privacy, especially as these tools become part of everyday interactions.
