A lawyer representing Anthropic apologized after an erroneous legal citation, generated by the company’s AI chatbot Claude, was used in an ongoing legal battle with music publishers. The incident came to light in a filing submitted to a Northern California court.
### Claude’s Hallucination
According to the filing, Claude generated a citation containing inaccurate information, including an incorrect title and incorrect authors. Anthropic’s legal team acknowledged that its manual citation check failed to catch the error, along with several other inaccuracies caused by Claude’s hallucinations.
### Apology and Explanation
Anthropic expressed regret for the mistake, describing it as an honest, unintentional citation error rather than a deliberate fabrication of authority.
### Legal Ramifications
The issue arose when lawyers representing Universal Music Group and other music publishers accused Anthropic’s expert witness, Olivia Chen, of using Claude to cite fake articles in her testimony. As a result, a federal judge ordered Anthropic to address these allegations.
The lawsuit between music publishers and tech companies over the use of generative AI tools continues to highlight the complexities of copyright disputes in the digital age. Despite these challenges, startups like Harvey are still securing substantial funding to automate legal processes.
