In a recent study, researchers from the University of Bath and the Technical University of Darmstadt found that generative AI models, like those in Meta’s Llama family, cannot learn independently or acquire new skills without explicit instruction. While these models can follow instructions superficially, they do not master new skills on their own.
Key Findings
The study challenges the prevailing narrative that generative AI poses an existential threat to humanity. The researchers argue that these technologies are not as dangerous as feared, and that fixating on imaginary, world-ending scenarios distracts from real problems and can lead to regrettable policymaking decisions.
Implications for Investors
As investors continue to pour billions into generative AI, it is worth weighing the risks these technologies actually pose. Generative AI may not lead to humanity’s extinction, but it is already causing harm in other ways: the spread of nonconsensual deepfake pornography, wrongful arrests based on faulty facial recognition matches, and the exploitation of underpaid data annotators.
News
– Google announced updates to its Gemini assistant at the Made By Google event.
– A class action lawsuit against Stability AI, Runway AI, and DeviantArt for copyright infringement moves forward.
– X, owned by Elon Musk, faces privacy complaints for processing users’ data without their consent.
– YouTube is testing an integration with Gemini for video brainstorming.
– OpenAI’s GPT-4o model, trained on voice, text, and image data, exhibits unusual behaviors.
Research Paper of the Week
A study by UPenn researchers found that AI text detectors are largely ineffective. Using a dataset they designed to benchmark detector performance, the researchers found the tools to be mostly useless, underscoring how difficult it remains to reliably identify AI-generated text.
Model of the Week
MIT researchers developed SigLLM, a framework that uses generative AI to detect anomalies in complex systems like wind turbines. While the framework’s performance is not yet exceptional, the researchers see potential for it to improve anomaly detection in the future.
Grab Bag
OpenAI upgraded ChatGPT with a new base model but released little information about what changed. Transparency is essential for building trust with users, and OpenAI should communicate more openly about its model updates.
