Meta Resolves Bug Exposing AI Prompts and Content

Meta has fixed a security flaw that allowed users of the Meta AI chatbot to view the private prompts and AI-generated responses of other users.

Sandeep Hodkasia, the founder of security testing firm AppSecure, revealed to TechCrunch that Meta rewarded him with a $10,000 bug bounty for privately reporting the bug he discovered on December 26, 2024.

According to Hodkasia, Meta implemented a fix on January 24, 2025, and did not find any signs of malicious exploitation of the bug.

Bug Discovery and Fix
Hodkasia explained that he stumbled upon the bug while exploring how Meta AI permits logged-in users to modify their AI prompts to regenerate text and images. By scrutinizing the network traffic in his browser during prompt editing, he realized he could alter a unique number assigned by Meta’s back-end servers to the prompt and its AI-generated response, thereby accessing someone else’s prompt and response.
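The behavior Hodkasia describes is a classic insecure direct object reference (IDOR): the server returned whatever record matched the supplied ID, without checking who asked. A minimal sketch in Python, using entirely hypothetical names (`PROMPT_STORE`, `get_prompt_vulnerable`, `get_prompt_fixed`) rather than Meta's actual code, illustrates the missing ownership check:

```python
# Hypothetical IDOR sketch -- not Meta's actual implementation.
# An in-memory store stands in for the back-end database; each record
# is keyed by the unique number the server assigns to a prompt.

PROMPT_STORE = {
    1001: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
    1002: {"owner": "bob", "prompt": "write a poem", "response": "Roses..."},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # No authorization check: any logged-in user who tampers with the
    # ID in their network traffic can fetch any record.
    return PROMPT_STORE.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    record = PROMPT_STORE.get(prompt_id)
    # Authorization check: reject records owned by someone else.
    if record is None or record["owner"] != requesting_user:
        return None
    return record

# "alice" changes the ID in her request to read bob's prompt:
assert get_prompt_vulnerable(1002, "alice") is not None  # leaks bob's data
assert get_prompt_fixed(1002, "alice") is None           # request denied
assert get_prompt_fixed(1001, "alice")["prompt"] == "draw a cat"
```

The fix is simply to tie the lookup to the authenticated session, so that editing the ID in transit no longer changes whose data comes back.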

The vulnerability stemmed from Meta servers failing to properly authenticate the user requesting the prompt and its response. Hodkasia noted that the prompt numbers generated by Meta servers were easily predictable, potentially enabling malicious actors to scrape users’ initial prompts by rapidly changing prompt numbers with automated tools.
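The predictability matters because it turns a one-off leak into a scraping problem. A hedged illustration (hypothetical numbers, using Python's standard `secrets` module) of why sequential IDs invite enumeration while high-entropy identifiers do not:

```python
import secrets

# Sequential IDs: an attacker who observes ID 1002 in their own
# traffic can trivially enumerate neighboring records with a script.
seen_id = 1002
guesses = [seen_id + i for i in range(1, 6)]  # next five candidate IDs

# High-entropy IDs: each identifier is drawn from a 2**128 space,
# so brute-force guessing a valid one is infeasible in practice.
random_id = secrets.token_hex(16)  # 32 hex characters, 128 bits

assert guesses == [1003, 1004, 1005, 1006, 1007]
assert len(random_id) == 32
```

Unguessable IDs raise the cost of scraping, but they are a mitigation, not a substitute for the server-side authorization check.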

Confirmation and Future Implications
Meta confirmed the January fix, said it found no signs of abuse, and paid the bounty. The bug surfaced as the tech industry rushes to launch and refine AI products despite the security and privacy risks that come with them.

Meta AI’s standalone app encountered a rough start earlier this year when some users inadvertently shared what they believed were private conversations with the chatbot.

The bug’s discovery serves as a reminder of the ongoing challenges in ensuring the security and privacy of AI technologies, especially as they become more prevalent in our daily interactions.
