As consumers, businesses, and governments are increasingly drawn to the allure of affordable, quick, and seemingly magical AI tools, one pressing question remains: How can I safeguard my data privacy?
Major tech players like OpenAI, Anthropic, xAI, Google, and others quietly collect and retain user data to improve their models or monitor for safety and security, even in enterprise settings where confidentiality is assumed. This gray area poses a significant obstacle for highly regulated industries and companies working on the cutting edge. Uncertainty about where data goes, who can see it, and how it might be used is hindering AI adoption in sectors such as healthcare, finance, and government.
In response to these challenges, San Francisco-based startup Confident Security has emerged with a mission to become “the Signal for AI.” The company’s flagship product, CONFSEC, is an end-to-end encryption tool that envelops core models, ensuring that prompts and metadata cannot be stored, accessed, or utilized for AI training purposes, even by the model provider or any third party.
Confident Security founder and CEO Jonathan Mortensen emphasized the stakes, stating, “The moment you relinquish your data to another party, you compromise your privacy. Our product aims to eliminate that trade-off.”
Confident Security recently emerged from stealth mode with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, as exclusively reported by TechCrunch. The company aims to sit as an intermediary between AI providers and their customers, including hyperscalers, governments, and enterprises.
Mortensen noted that even AI companies stand to benefit from offering Confident Security’s tool to enterprise customers, since it opens up new market opportunities. He highlighted that CONFSEC is a good fit for emerging AI browsers like Perplexity’s Comet, giving customers assurance that their sensitive data is secure and that their work-related prompts are not used for AI training.
Modeled after Apple’s Private Cloud Compute (PCC) architecture, CONFSEC encrypts user data so that it can be decrypted only under strict, predefined conditions, and routes requests through intermediary services such as Cloudflare or Fastly. The intermediaries never see the plaintext content, and the servers running the models never see the original source of a request.
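The article does not publish CONFSEC’s actual protocol, but the split of trust it describes can be sketched in a few lines. The toy below uses a hash-based XOR stream cipher as a stand-in for real authenticated encryption, and all names (the session key, the routing label) are hypothetical; the point is only the architectural property that the relay sees routing metadata but no plaintext, while the inference side sees the prompt but no client identity.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 -- a toy stand-in for real AEAD."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

# Client and inference node share a session key. In a PCC-style design this
# would be negotiated against an attested enclave; that step is skipped here.
session_key = secrets.token_bytes(32)
prompt = b"summarize this confidential contract"

ciphertext = encrypt(session_key, prompt)

# Relay's view (e.g. a CDN such as Cloudflare or Fastly): an opaque payload
# plus routing metadata -- no readable prompt.
relay_sees = {"route": "inference-pool", "payload": ciphertext}
assert prompt not in relay_sees["payload"]

# Inference node's view: the decrypted prompt, but no client identity,
# which was stripped at the relay hop.
recovered = decrypt(session_key, relay_sees["payload"])
assert recovered == prompt
```

The design choice being illustrated is separation of knowledge: no single party on the path holds both the request’s content and its origin, which is what lets a provider claim it cannot store or train on user prompts even in principle.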
Furthermore, CONFSEC’s AI inference software is transparently logged and open to scrutiny by experts to validate its commitments to privacy protection.
Decibel partner Jess Leão commended Confident Security for building trust into AI infrastructure from the outset, calling it essential to the future of AI development. Without solutions like CONFSEC, she argued, many enterprises will struggle to move forward with AI at all.
While Confident Security is still in its early stages, Mortensen affirmed that CONFSEC has undergone external testing and auditing and is ready for deployment. The team is engaged in discussions with potential clients such as banks, browsers, and search engines to integrate CONFSEC into their infrastructure.
In Mortensen’s words, “You provide the AI, we ensure the privacy.”
