This Week in AI: Generative AI Not as Harmful as Feared

In a recent study, researchers from the University of Bath and the Technical University of Darmstadt found that generative AI models, like those in Meta’s Llama family, cannot learn independently or acquire new skills without explicit instruction. These models can follow instructions and apply examples supplied in their prompts, but they show no ability to master genuinely new skills on their own.
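To make the distinction concrete, here is a minimal sketch of in-context (few-shot) prompting, the mechanism the researchers point to behind apparently "emergent" skills: the task is demonstrated entirely inside the prompt, and no weights are updated. The model ID below is a placeholder; any instruction-tuned Llama-family checkpoint on the Hugging Face Hub (access permitting) would behave similarly.

```python
# Minimal sketch of in-context (few-shot) prompting with a Llama-family
# model via Hugging Face transformers. The model ID is a placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder; gated, needs HF access
)

# The "skill" (sentiment labeling) is demonstrated entirely inside the
# prompt; the model imitates the pattern without learning anything new.
prompt = (
    "Review: The battery died within a week. Sentiment: negative\n"
    "Review: Setup took seconds and it just works. Sentiment: positive\n"
    "Review: The screen cracked on day one. Sentiment:"
)

result = generator(prompt, max_new_tokens=3, do_sample=False)
print(result[0]["generated_text"])
```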

Key Findings

The study challenges the prevailing narrative that generative AI poses an existential threat to humanity. The researchers argue that these technologies are not as harmful as feared, and that fixating on imaginary, world-ending scenarios can lead to regrettable policymaking decisions.

Implications for Investors

As investors continue to pour billions into generative AI, it is essential to weigh the risks these technologies actually pose. While generative AI may not lead to humanity’s extinction, it has already caused harm in other ways, such as the spread of deepfake porn, wrongful arrests driven by faulty facial recognition, and the exploitation of underpaid data annotators.

News

– Google announced updates to its Gemini assistant at the Made By Google event.
– A class action lawsuit against Stability AI, Runway AI, and DeviantArt for copyright infringement moves forward.
– X, owned by Elon Musk, faces privacy complaints for data processing without consent.
– YouTube is testing an integration with Gemini for video brainstorming.
– OpenAI’s GPT-4o model, trained on voice, text, and image data, can exhibit unusual behaviors, such as unexpectedly imitating the voice of the person speaking to it.

Research Paper of the Week

A study by UPenn researchers found that AI text detectors are largely ineffective. The team designed a benchmark dataset for evaluating detector performance and found most detectors to be close to useless in practice, underscoring how hard it remains to reliably identify AI-generated text.
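For illustration, here is a minimal sketch of the kind of evaluation such a benchmark performs: score a mix of human-written and AI-generated texts with a detector, then measure how well the scores separate the two classes. The toy detector and tiny sample below are invented for this example; the study evaluated real detectors on a far larger corpus.

```python
# Minimal sketch of benchmarking an AI-text detector: compute AUROC over
# labeled samples. The "detector" here is a deliberately naive stand-in.
from sklearn.metrics import roc_auc_score

def toy_detector(text: str) -> float:
    """Stand-in detector returning a pseudo AI-likelihood score.
    Real detectors rely on perplexity, trained classifiers, or watermarks."""
    words = text.split()
    # Naive heuristic: longer average word length reads as "more AI-like".
    return sum(len(w) for w in words) / max(len(words), 1) / 10.0

# Label 1 = AI-generated, 0 = human-written (tiny illustrative sample).
samples = [
    ("The experimental results demonstrate substantial improvements.", 1),
    ("honestly the movie was fine, kinda slow in the middle tho", 0),
    ("In conclusion, the proposed framework exhibits robust performance.", 1),
    ("we grabbed tacos after the game and talked for hours", 0),
]

texts, labels = zip(*samples)
scores = [toy_detector(t) for t in texts]

# An AUROC of 0.5 is chance level; the study found many real detectors
# perform far closer to chance than their marketing suggests.
print(f"AUROC: {roc_auc_score(labels, scores):.2f}")
```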

Model of the Week

MIT researchers developed SigLLM, a framework that uses generative AI to detect anomalies in time-series data from complex systems such as wind turbines. While its performance is not yet exceptional, the framework shows promise for future anomaly detection work.
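For rough intuition about the forecast-and-compare idea behind this kind of framework, here is a minimal sketch: predict each next value of a signal, then flag readings that deviate sharply from the forecast. The moving-average forecaster is a stand-in for SigLLM’s LLM-based forecasting step, and the synthetic signal is invented for the example.

```python
# Minimal sketch of anomaly detection by forecasting: flag points where
# the observed signal deviates sharply from a one-step-ahead forecast.
# A moving average stands in for SigLLM's LLM forecaster.
import numpy as np

def detect_anomalies(signal: np.ndarray, window: int = 10, k: float = 3.0):
    """Return indices where |actual - forecast| exceeds mean + k*std of residuals."""
    forecasts = np.array([
        signal[max(0, i - window):i].mean() for i in range(1, len(signal))
    ])
    residuals = np.abs(signal[1:] - forecasts)
    threshold = residuals.mean() + k * residuals.std()
    return np.where(residuals > threshold)[0] + 1  # shift back to signal indices

# Synthetic sensor-like signal with one injected spike.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
signal[250] += 3.0  # injected anomaly

print("Anomalous indices:", detect_anomalies(signal))
```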

Grab Bag

OpenAI upgraded ChatGPT with a new base model but released little detail about what changed. Transparency is essential for building trust with users, and OpenAI should communicate more openly about its model updates.
